Support for deploying k8s cluster into Digital Ocean #1059

Closed
pchico83 opened this issue Aug 27, 2014 · 19 comments
Comments

@pchico83

Due to the lower price of droplets compared to other cloud providers, I think Digital Ocean support would be interesting for quickly setting up k8s clusters for testing, demos, and playing around.
I can take care of the PR, since I am already playing with k8s on Digital Ocean.

@bgrant0607
Member

A PR would be welcome.

@pchico83
Author

Sounds good. I am considering creating a docker image and submitting the Dockerfile in the PR, in order to avoid installing the required packages on the local computer. The `docker run` command would then receive the (non-default) configuration as environment variables.
Is that OK, or should I follow the same approach as the other providers?

@brendandburns
Contributor

You mean Dockerfiles for the various k8s components? Those Dockerfiles already largely exist in the ./build/ directory. We're working toward containerizing all of the k8s components.

@pchico83
Author

No, I mean a container that just launches the k8s cluster on Digital Ocean. I see the following advantages:

  • Fewer steps for the user: for example, for GCE, gcutil would be installed in the container that sets up the k8s cluster, not on the user's machine.
  • Less error-prone, as the cluster setup is done by the container rather than in the user's OS.

On the other hand, containerizing the k8s components sounds very attractive to me, but I think it is independent from using a container that sets up the k8s cluster.
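To make it concrete, something like this is what I have in mind. The image name, command, and variable names below are just placeholders for illustration; nothing here exists yet:

```shell
# Build an image that bundles all the provider tooling (API client, setup
# scripts), so the user only needs Docker on their machine.
docker build -t k8s-do-setup .

# Launch the cluster; non-default configuration is passed as env vars.
# DO_API_TOKEN, NUM_MINIONS, and DO_REGION are hypothetical names.
docker run --rm \
  -e DO_API_TOKEN="$DO_API_TOKEN" \
  -e NUM_MINIONS=3 \
  -e DO_REGION=nyc3 \
  k8s-do-setup kube-up
```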

@ngConsulti

I'd be willing to help test...

@pchico83
Author

@bgrant0607 now that k8s uses IP-per-pod, I guess the only option for Digital Ocean support is:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/ovs-networking.md

Am I right?

@brendandburns
Contributor

Have you looked at Rudder from CoreOS? I think it should work too.


@pchico83
Author

Is there any advantage of Rudder over OVS, or vice versa? Rudder is a little better documented.

@jbeda
Contributor

jbeda commented Sep 11, 2014

Honestly, @pchico83, a lot of this stuff is so new and still developing that I'm not sure how to weigh one solution over another. Rudder should be easier to set up, I think, but OVS might be faster. At some point Rudder may automate the setup of OVS/VxLAN where appropriate. I know that is something they are looking at.

/cc @eyakubovich if he wants to weigh in.

(To add to this, there is Weave in the mix now too. I haven't played with Weave, and I don't think it is compatible with k8s out of the box, as it wraps docker and so can mess with network namespaces.)

@pchico83
Author

Thanks @jbeda. I will try Rudder first, as it is proposed in https://github.com/bketelsen/coreos-kubernetes-digitalocean. I also want to play with the cluster setup because, in the longer term, I would like to build a web project to deploy Kubernetes clusters on different providers, supporting different options.

@jbeda
Contributor

jbeda commented Sep 11, 2014

Also check out #1153, as we are looking to untangle/rewrite the mess of shell scripts. That one is about rewriting kube-up in Go.

@eyakubovich

@pchico83 Rudder currently uses userspace encapsulation, but we're actively looking into using VXLAN. If you're mostly playing around, the overhead of userspace encapsulation should not be prohibitive. By the time this setup is production-ready, we should have a kernel-based data path.

If you need help setting it up, feel free to contact us on IRC (#coreos on freenode) or mailing list (https://groups.google.com/forum/#!forum/coreos-dev).

@pchico83
Author

@jbeda Probably this is not the right moment, but should we consider using Rudder (or some other general solution) for all cloud provider implementations? This way, the cluster setup process would be more standard across providers. Ideally, I think only the VM provisioning process should be provider-specific.
Also, do you know how Azure/Rackspace solve the IP-per-pod problem? I checked the implementation, and they don't seem to be using Rudder or OVS/VxLAN, and, as far as I know, Azure/Rackspace don't have the GCE network capabilities.

@jbeda
Contributor

jbeda commented Sep 12, 2014

/cc @thockin

I'd be in favor of settling on one "network normalization" layer that handles the various techniques. Rudder isn't there yet but could be. Specifically, for GCE we really want to use the GCE advanced routing features, as they are faster and work transparently from all hosts on the network (even those not configured to be part of the k8s cluster).
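(For reference, the GCE advanced routing approach amounts to one route per minion, telling the GCE network to deliver that minion's pod subnet straight to the instance. A sketch with the gcloud CLI; the route name, CIDR, instance name, and zone are examples, not the values the scripts actually use:)

```shell
# Route traffic for one minion's pod subnet directly to that instance.
# All names and addresses below are illustrative placeholders.
gcloud compute routes create k8s-minion-1-pods \
  --network default \
  --destination-range 10.244.1.0/24 \
  --next-hop-instance kubernetes-minion-1 \
  --next-hop-instance-zone us-central1-b
```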

Azure does an OpenVPN mesh (https://github.com/GoogleCloudPlatform/kubernetes/blob/732b7ce7ef975704a6e106a1e45066f2eca1adcb/cluster/saltbase/salt/top.sls#L30) and Rackspace looks to use VXLAN (https://github.com/GoogleCloudPlatform/kubernetes/blob/cbedf9f4702bbc3ebde1c68c2c4f7f6b050f61b4/cluster/rackspace/templates/salt-minion.sh#L41), but I can't say I understand all the details.

Both of these are probably replaceable with Rudder. We'd want to coordinate/confirm with those guys.

@thockin
Member

thockin commented Sep 12, 2014

I'd be OK with standardizing on a stack, if someone is up to the task of maintaining the middle layer. It sounds like this is where Rudder wants to go...


@eyakubovich

There's a PR in Rudder that adds an allocation mode (it should get merged soon). It adds the ability to allocate a subnet, but it doesn't do anything to the data packets. It just writes the allocation to /run/rudder/subnet.env and signals systemd that it has started. Another dependent unit could wait on that and issue the GCE command to set up its advanced routing. I would also be happy to create a GCE mode that would do that inside Rudder.
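To make the handoff concrete, a dependent unit (say, via an ExecStartPre script) would just source that file and act on the values. The variable names below are assumptions for illustration; I'm writing to /tmp here instead of /run/rudder so the sketch runs without Rudder present:

```shell
# Simulate the env file the allocation mode is described as writing.
# RUDDER_SUBNET / RUDDER_MTU are hypothetical key names.
mkdir -p /tmp/rudder-demo
cat > /tmp/rudder-demo/subnet.env <<'EOF'
RUDDER_SUBNET=10.244.12.1/24
RUDDER_MTU=1472
EOF

# A dependent service sources the file, then issues the provider-specific
# command (e.g. the GCE route creation) for the allocated subnet.
. /tmp/rudder-demo/subnet.env
echo "allocated subnet: ${RUDDER_SUBNET} (mtu ${RUDDER_MTU})"
```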

The work we're doing on getting VXLAN to work should make it very similar to the Rackspace approach (although they're using multicast and we're avoiding it).
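(For anyone following along, the multicast/unicast distinction looks roughly like this with iproute2; VNIs, group address, device, and peer IP are all made-up examples, and the commands need root:)

```shell
# Multicast VXLAN (Rackspace-style): peers are discovered via a multicast
# group, so no per-peer configuration is needed.
ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth0

# Unicast VXLAN (avoiding multicast): learning is disabled and forwarding
# entries are programmed explicitly, one per remote peer.
ip link add vxlan1 type vxlan id 43 dev eth0 nolearning
bridge fdb append 00:00:00:00:00:00 dev vxlan1 dst 203.0.113.7
```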

As for Azure and the OpenVPN mesh, I am not sure if they're using OpenVPN just for the overlay or because they also want encryption. We're not currently working on anything that will support encryption in Rudder.

@jbeda
Contributor

jbeda commented Sep 13, 2014

I'm going to fork this topic into #1307. Wrt Digital Ocean, I think that Rudder is a fine choice to get things up and running as is. As things like VxLAN are added, it'll only get better.

@pchico83
Author

Sounds good. I will play with SaltStack for the Rudder installation then, to make it more reusable.

@brendandburns
Contributor

Closing this, as I believe we have decent Digital Ocean support now.

vishh added a commit to vishh/kubernetes that referenced this issue Apr 6, 2016
deads2k pushed a commit to deads2k/kubernetes that referenced this issue Nov 17, 2021