Support for deploying k8s cluster into Digital Ocean #1059
Due to the lower price of droplets compared to other cloud providers, I guess Digital Ocean support would be interesting for quickly setting up k8s clusters for testing, demos, and playing around.

I can take care of the PR since I am already playing with k8s in Digital Ocean.

Comments
A PR would be welcome.
Sounds good. I am considering creating a Docker image and submitting the Dockerfile in the PR, to avoid having to install the required packages on the local computer. The docker run command would then receive any non-default configuration as environment variables.
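A rough sketch of how that could look; the image name, entrypoint, and environment variable names below are hypothetical, just to illustrate the idea:

```bash
# Hypothetical image that bundles the cluster-bringup tooling; its
# entrypoint would run the Digital Ocean kube-up logic, using the
# environment variables below for non-default configuration.
docker run --rm \
  -e DO_API_TOKEN="$DO_API_TOKEN" \
  -e NUM_MINIONS=3 \
  -e DROPLET_SIZE=2gb \
  -e DROPLET_REGION=nyc3 \
  kube-deploy-do kube-up
```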
You mean Dockerfiles for the various k8s components? Those Dockerfiles already largely exist in the ./build/ directory. We're working toward containerizing all of the k8s components.
No, I mean a container that just launches the k8s cluster in Digital Ocean. I see the following advantages:
On the other hand, containerizing the k8s components sounds very attractive to me, but I think it is independent of using a container that sets up the k8s cluster.
I'd be willing to help test...
@bgrant0607 Now that k8s uses IP-per-Pod, I guess the only option for Digital Ocean support is: am I right?
Have you looked at Rudder from CoreOS? I think it should work too. Brendan
Is there any advantage of Rudder over OVS, or vice versa? Rudder is a little better documented.
Honestly, @pchico83, a lot of this stuff is so new and still developing that I'm not sure how to weigh one solution over another. Rudder should be easier to set up, I think, but OVS might be faster. At some point Rudder may automate the setup of OVS/VXLAN where appropriate; I know that is something they are looking at. /cc @eyakubovich if he wants to weigh in. (To add to this, there is Weave in the mix now too. I haven't played with Weave, and I don't think it is compatible with k8s out of the box, as it wraps Docker so that it can mess with network namespaces.)
Thanks @jbeda. I will try Rudder first, as proposed in https://github.com/bketelsen/coreos-kubernetes-digitalocean. I also want to play with the cluster setup because, in the longer term, I would like to implement a web project to deploy Kubernetes clusters on different providers, supporting different options.
Also check out #1153, as we are looking to untangle/rewrite the mess of shell scripts. That one is about rewriting kube-up in Go.
@pchico83 Rudder currently uses userspace encapsulation, but we're actively looking into using VXLAN. If you're mostly playing around, the overhead of the userspace path should not be prohibitive. By the time this setup is production ready, we should have a kernel-based data path. If you need help setting it up, feel free to contact us on IRC (#coreos on freenode) or on the mailing list (https://groups.google.com/forum/#!forum/coreos-dev).
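For anyone following along, a minimal sketch of bringing up Rudder on a node; the etcd key, JSON format, CIDR, and the variable names in subnet.env are assumptions based on the Rudder docs of the time and may differ between releases:

```bash
# Publish the overlay network config to etcd (key and format assumed;
# check the Rudder docs for your version).
etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16" }'

# Start rudder; it allocates a per-host subnet out of that range and
# writes the result to /run/rudder/subnet.env once the overlay is up.
rudder &

# Point the Docker daemon at the allocated subnet so container IPs are
# reachable across hosts (variable names assumed).
source /run/rudder/subnet.env
docker -d --bip="${RUDDER_SUBNET}" --mtu="${RUDDER_MTU}"
```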
@jbeda Probably this is not the right moment, but should we consider using Rudder (or any other general solution) for all cloud provider implementations? That way, the cluster setup process would be more standard across providers. Ideally, I think only the VM provisioning process should be provider-specific.
/cc @thockin I'd be in favor of settling on one "network normalization" layer that handles the various techniques. Rudder isn't there yet but could be. Specifically, for GCE we really want to use the GCE advanced routing features, as they are faster and work transparently from all hosts on the network (even those not configured to be part of the k8s cluster). Azure does an OpenVPN mesh (https://github.com/GoogleCloudPlatform/kubernetes/blob/732b7ce7ef975704a6e106a1e45066f2eca1adcb/cluster/saltbase/salt/top.sls#L30) and Rackspace looks to use VXLAN (https://github.com/GoogleCloudPlatform/kubernetes/blob/cbedf9f4702bbc3ebde1c68c2c4f7f6b050f61b4/cluster/rackspace/templates/salt-minion.sh#L41), but I can't say I understand all the details. Both of these are probably replaceable with Rudder. We'd want to coordinate/confirm with those guys.
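For reference, the GCE advanced-routing approach amounts to programming one route per node for that node's pod subnet, roughly along these lines (the instance name, zone, and CIDR are made up for illustration):

```bash
# Any host on the GCE network can then reach pods on kubernetes-minion-1
# directly, without an overlay, because GCE routes that pod CIDR to it.
gcloud compute routes create kubernetes-minion-1-pods \
  --destination-range=10.244.1.0/24 \
  --next-hop-instance=kubernetes-minion-1 \
  --next-hop-instance-zone=us-central1-b
```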
I'd be OK with standardizing on a stack, if someone is up to the task of...
There's a PR in Rudder that adds an allocation mode (it should get merged soon). It adds the ability to allocate a subnet, but it doesn't do anything to the data packets: it just writes the allocation to /run/rudder/subnet.env and signals systemd that it has started. Another dependency could wait on that and issue the GCE command to set up its advanced routing. I would also be happy to create a GCE mode that would do that inside Rudder. The work we're doing on getting VXLAN to work should make it very similar to the Rackspace approach (although they're using multicast and we're avoiding it). As for Azure and the OpenVPN mesh, I am not sure if they're using OpenVPN just for the overlay or because they also want encryption. We're not working on anything that will support encryption in Rudder for now.
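Concretely, the consuming side could be a small script (or a systemd unit ordered after Rudder) that waits for the allocation and then programs the GCE route, along these lines; the variable name inside subnet.env is an assumption on my part:

```bash
#!/bin/bash
# Wait for Rudder to write its allocated subnet, then hand it to GCE as
# an advanced route for this node's pods.
while [ ! -f /run/rudder/subnet.env ]; do sleep 1; done
source /run/rudder/subnet.env  # assumed to define RUDDER_SUBNET, e.g. 10.100.63.0/24

gcloud compute routes create "$(hostname)-pods" \
  --destination-range="${RUDDER_SUBNET}" \
  --next-hop-instance="$(hostname)" \
  --next-hop-instance-zone=us-central1-b
```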
I'm going to fork this topic into #1307. Wrt Digital Ocean, I think Rudder is a fine choice to get things up and running as-is. As things like VXLAN are added, it'll only get better.
Sounds good. I will play with SaltStack for the Rudder installation then, to make it more reusable.
Closing this as I believe we have decent Digital Ocean support now.