Provide pod env vars when managing master and slaves apps in a single service #1312

Closed · Gurpartap opened this issue Sep 13, 2014 · 2 comments

@Gurpartap (Contributor)

I'm trying to serve a redis master and its slaves through a single service. I have successfully managed to create such a service by leveraging labels and replicaSelector, with all nodes proxying to the master pod.

$ cat redis-master-pod.json # snipped
...
"desiredState": {
  "manifest": {
    "containers": [{ "ports": [{ "containerPort": 6379 }] }]
    ...
"labels": { "name": "redis", "role": "master" }
$ cat redis-slave-controller.json
...
"desiredState": {
  "replicaSelector": { "name": "redis", "role": "slave" },
  "podTemplate": {
    "desiredState": { 
       ...  "command": ["redis-server", "/etc/redis/redis.conf", "--slaveof", "$ADDR", "$PORT"] }, # see #1309
    "labels": { "name": "redis", "role": "slave" }

However, the slaves will need the master's address:port information through environment variables [1], which are only defined for services.
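
Concretely, if the master sat behind its own service, the slave command could consume that service's variables instead of the placeholders above (a sketch assuming a service id of redismaster and the {SERVICE_ID}_SERVICE_HOST / {SERVICE_ID}_SERVICE_PORT variable convention, and still subject to the substitution issue in #1309):

"command": ["redis-server", "/etc/redis/redis.conf",
            "--slaveof", "$REDISMASTER_SERVICE_HOST", "$REDISMASTER_SERVICE_PORT"]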

I find this idea of encapsulating a master/slave cluster in a single service clean from a higher level. But I'm not sure whether it suits the intended design of pods, replicationControllers, and services.

OTOH, providing such variables for a pod would overlap with a service's purpose of providing the same.

Also, this means that the slaves are externally inaccessible. Am I wasting time with this?

Thanks!

[1] Related: #1309. I'm currently hard-coding the master's address:port value for the slaves.

-- Gurpartap Singh

@thockin (Member) commented Sep 14, 2014

You're trying to do two things in a single service; that doesn't work. Service discovery is at service granularity: if your slaves need to find your master, that suggests they should not be in the same Service.
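
A minimal sketch of that split, in the v1beta1-style JSON of the period (the ids and ports here are illustrative, not from this thread): one service selects only the master pod, so the slaves can discover it through that service's environment variables, and a second service fronts the slaves for read traffic:

$ cat redis-master-service.json
{
  "id": "redismaster",
  "port": 10000,
  "selector": { "name": "redis", "role": "master" }  # matches only the master pod
}
$ cat redis-slave-service.json
{
  "id": "redisslave",
  "port": 10001,
  "selector": { "name": "redis", "role": "slave" }   # matches the replicated slaves
}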

@jbeda (Contributor) commented Sep 17, 2014

@Gurpartap The idea of creating "composite", higher-level manageable "objects" from these primitives is probably at a layer above Kubernetes.

So -- Kubernetes knows how to launch a replicated set of pods and provide network access to them, but it doesn't know how to create a "service object" that consists of two service definitions (one for the master, one for the slaves) and two replicated sets of pods (a singleton master and replicated read slaves). That is left to a logically higher system.

We generally call this problem "config", and there are proposals in #1007; @bgrant0607 is driving some of that discussion. See also #1325, as the new kubecfg has hints of this too.
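
Purely as a hypothetical sketch of that higher layer (the shape of the idea being discussed in #1007/#1325, not an API that existed at the time), such a "config" might bundle the four objects into one manageable unit:

{
  "kind": "Config",  # hypothetical composite object
  "items": [
    { "kind": "Service", "id": "redismaster", ... },    # service for the master
    { "kind": "Service", "id": "redisslave", ... },     # service for the slaves
    { "kind": "Pod", "id": "redis-master", ... },       # singleton master
    { "kind": "ReplicationController", "id": "redis-slave-controller", ... }  # replicated read slaves
  ]
}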

@jbeda closed this as completed Sep 17, 2014