a new DeploymentConfig with replicas=1 creates a ReplicationController with replicas=0 #9216

Closed
@jstrachan

Description

How do I create a DeploymentConfig with replicas=1 so that it actually creates a ReplicationController with replicas > 0?

This one had me confused for a while; I figured OpenShift was broken ;)

Version
# oc version
oc v1.3.0-alpha.1
kubernetes v1.3.0-alpha.1-331-g0522e63
Steps To Reproduce

Here's the YAML I'm using to create the DC:

kind: DeploymentConfig
apiVersion: v1
metadata:
  name: funkything
  namespace: default-staging
  selfLink: /oapi/v1/namespaces/default-staging/deploymentconfigs/funkything
  uid: d4f6f9f4-2d48-11e6-9cc4-080027b5c2f4
  resourceVersion: '1616'
  generation: 2
  creationTimestamp: '2016-06-08T07:15:51Z'
  labels:
    group: io.fabric8.funktion.quickstart
    project: funkything
    provider: fabric8
    version: 1.0.3
  annotations:
    fabric8.io/build-url: 'http://jenkins.vagrant.f8/job/funky1/3'
    fabric8.io/git-branch: funky1-1.0.3
    fabric8.io/git-commit: 317304e59ce4fcac045c0b47ed5613196e36748d
    fabric8.io/git-url: >-
      http://gogs.vagrant.f8/gogsadmin/funky1/commit/317304e59ce4fcac045c0b47ed5613196e36748d
    fabric8.io/iconUrl: img/icons/funktion.png
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
  triggers:
    - type: ConfigChange
  replicas: 1
  test: false
  selector:
    group: io.fabric8.funktion.quickstart
    project: funkything
    provider: fabric8
  template:
    metadata:
      creationTimestamp: null
      labels:
        group: io.fabric8.funktion.quickstart
        project: funkything
        provider: fabric8
        version: 1.0.3
    spec:
      containers:
        - name: quickstart-funkything
          image: 'quickstart/funkything:1.0.3'
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources: {}
          livenessProbe:
            httpGet:
              path: /health
              port: 8081
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 8081
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
status:
  latestVersion: 1
  details:
    causes:
      - type: ConfigChange
  observedGeneration: 2
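
For what it's worth, the manifest above was evidently exported from a live object: it carries server-populated fields (selfLink, uid, resourceVersion, generation, creationTimestamp) and a status stanza with latestVersion: 1. A stripped-down manifest of the kind one would normally POST, keeping only the client-owned fields, would look roughly like this (a sketch; strategy, probes, and env elided for brevity):

```yaml
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: funkything
  namespace: default-staging
  labels:
    group: io.fabric8.funktion.quickstart
    project: funkything
    provider: fabric8
    version: 1.0.3
spec:
  replicas: 1
  triggers:
    - type: ConfigChange
  selector:
    group: io.fabric8.funktion.quickstart
    project: funkything
    provider: fabric8
  template:
    metadata:
      labels:
        group: io.fabric8.funktion.quickstart
        project: funkything
        provider: fabric8
        version: 1.0.3
    spec:
      containers:
        - name: quickstart-funkything
          image: 'quickstart/funkything:1.0.3'
```

I haven't confirmed whether the extra server-side fields are related to the behaviour, but re-creating from a cleaned manifest is an easy check.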
Current Result

Here's the DC and the RC it created:

$ oc get dc
NAME         REVISION   REPLICAS   TRIGGERED BY
funkything   1          1          config
$ oc get rc
NAME           DESIRED   CURRENT   AGE
funkything-1   0         0         11m
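
For anyone hitting the same thing, the deployment can be nudged by hand in the meantime (a sketch using standard oc commands; I haven't confirmed these unstick this particular case):

```
# Force a new deployment from the latest config
oc deploy funkything --latest -n default-staging

# Or scale the existing RC directly
oc scale rc funkything-1 --replicas=1 -n default-staging
```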
Expected Result
$ oc get dc
NAME         REVISION   REPLICAS   TRIGGERED BY
funkything   1          1          config
$ oc get rc
NAME           DESIRED   CURRENT   AGE
funkything-1   1         1         11m
Additional Information

I don't see any warnings/errors/events in OpenShift itself, the DC, the RC, or the deploy pod to indicate why it's not scaling up the RC.
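
Concretely, the checks above correspond to commands like these (a sketch of where one would look; everything comes back clean here):

```
oc describe dc funkything -n default-staging    # DC details and events
oc describe rc funkything-1 -n default-staging  # RC details and events
oc get events -n default-staging                # namespace-level events
oc logs funkything-1-deploy -n default-staging  # deployer pod log
```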
