
Fix UnboundLocalError #85

Closed
vishesh92 wants to merge 1 commit into hjacobs:master from vishesh92:patch-1

Conversation

vishesh92

Traceback (most recent call last):
  File "/kube_downscaler/scaler.py", line 122, in autoscale_resource
    is_uptime,
UnboundLocalError: local variable 'is_uptime' referenced before assignment

Fix UnboundLocalError: local variable 'is_uptime' referenced before assignment
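
The failure is the standard Python pattern of a local variable that is only bound on some branches. A minimal sketch of the shape of the bug (hypothetical names, not the actual scaler.py code):

import logging

def autoscale_sketch(now, upscale_matches, downscale_matches):
    # is_uptime is only bound when one of the period branches matches...
    if upscale_matches(now):
        is_uptime = True
    elif downscale_matches(now):
        is_uptime = False
    # ...so when neither period matches "now", the next line raises
    # UnboundLocalError: local variable 'is_uptime' referenced before assignment
    logging.debug("uptime: %s", is_uptime)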
@coveralls

Coverage Status

Coverage increased (+0.02%) to 95.0% when pulling 11eca2e on vishesh92:patch-1 into abecab5 on hjacobs:master.

@hjacobs
Owner

hjacobs commented Jan 29, 2020

In which situation do you see this error? Maybe when periods overlap? Please paste some logs from before the stacktrace.

@vishesh92
Author

vishesh92 commented Jan 30, 2020

I am using version 20.1.0. I added this annotation to my deployment: downscaler/downscale-period: Mon-Sun 13:30-14:25 Asia/Kolkata. My deployment's replicas went to 0, but they never came back up, and I saw this error in the logs.

2020-01-30 09:03:05,239 DEBUG: https://172.20.0.1:443 "GET /api/v1/namespaces/my-namespace HTTP/1.1" 200 414
2020-01-30 09:03:05,239 ERROR: Failed to process Deployment my-namespace/nginx-deployment : local variable 'is_uptime' referenced before assignment
Traceback (most recent call last):
  File "/kube_downscaler/scaler.py", line 122, in autoscale_resource
    is_uptime,
UnboundLocalError: local variable 'is_uptime' referenced before assignment
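
For reference, the annotation sits on the Deployment itself; applied to the resource from the log above, it would look roughly like this (a sketch; only the annotation key and value are taken from this thread):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace
  annotations:
    # times outside this window are what hit the unassigned branch
    downscaler/downscale-period: "Mon-Sun 13:30-14:25 Asia/Kolkata"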

@hjacobs
Owner

hjacobs commented Jan 31, 2020

Can you add a test case which fails before the fix and succeeds afterwards?
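
For illustration, such a test could pin down the "neither period matches" case in isolation (a pytest sketch against a hypothetical extracted helper, not the real autoscale_resource signature):

def decide_is_uptime(in_upscale_period, in_downscale_period, default_uptime=True):
    # Stand-in for the branch logic in scaler.py: a correct version must
    # return a value on *every* path, including "neither period matches".
    if in_upscale_period:
        return True
    if in_downscale_period:
        return False
    return default_uptime  # the previously missing path


def test_outside_any_period_still_yields_a_value():
    # Before the fix this case left is_uptime unbound (UnboundLocalError);
    # after the fix it should fall back to the default uptime behaviour.
    assert decide_is_uptime(False, False) is True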

@hjacobs
Owner

hjacobs commented Jan 31, 2020

Note that your deployment will not scale up if you only have "downscale-period" (see README).
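
Concretely, pairing the flags gives you the scale-up edge as well; an illustrative snippet (times are placeholders):

args:
  # scale down during this window...
  - --downscale-period=Mon-Sun 19:00-20:00 Europe/Amsterdam
  # ...and explicitly scale back up afterwards; with only downscale-period
  # set, replicas stay at the downtime count
  - --upscale-period=Mon-Sun 20:00-20:05 Europe/Amsterdam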

@rdbraber

rdbraber commented Feb 7, 2020

I'm facing the same issue, and it's quite simple to reproduce. I've created the following deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: downscaler
  labels:
    application: ds-test-scaledown-always
    version: v20.1.0
  name: ds-test-scaledown-always
spec:
  replicas: 1
  selector:
    matchLabels:
      application: ds-test-scaledown-always
  template:
    metadata:
      labels:
        application: ds-test-scaledown-always
        version: v20.1.0
    spec:
      serviceAccountName: downscaler
      containers:
      - name: downscaler
        # see https://github.com/hjacobs/downscaler/releases
        image: hjacobs/kube-downscaler:20.1.0
        args:
          # run every minute
          - --interval=60
          # only one namespace
          - --namespace=ds-test
          # include resources
          - --include-resources=deployments,statefulsets
          # Scale down every night between 19:00 and 20:00
          - --downscale-period=Mon-Sun 19:00-20:00 Europe/Amsterdam
          # number of replicas to scale to
          - --downtime-replicas=0
          # We show no mercy, not even for newly created deployments
          - --grace-period=0
          - --debug
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 5m
            memory: 50Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000

When checking the log file at a time outside the range specified by the downscale-period, it shows the following:

2020-02-07 09:02:13,968 INFO: Downscaler v20.1.0 started with debug=True, default_downtime=never, default_uptime=always, downscale_period=Mon-Sun 19:00-20:00 Europe/Amsterdam, downtime_replicas=0, dry_run=False, exclude_cronjobs=, exclude_deployments=kube-downscaler,downscaler, exclude_namespaces=kube-system, exclude_statefulsets=, grace_period=0, include_resources=deployments,statefulsets, interval=60, namespace=ds-test, once=False, upscale_period=never
2020-02-07 09:02:13,972 DEBUG: Starting new HTTPS connection (1): 172.20.0.1
2020-02-07 09:02:14,007 DEBUG: https://172.20.0.1:443 "GET /api/v1/namespaces/ds-test/pods HTTP/1.1" 200 None
2020-02-07 09:02:14,016 DEBUG: https://172.20.0.1:443 "GET /apis/apps/v1/namespaces/ds-test/deployments HTTP/1.1" 200 1675
2020-02-07 09:02:14,023 DEBUG: https://172.20.0.1:443 "GET /api/v1/namespaces/ds-test HTTP/1.1" 200 291
2020-02-07 09:02:14,048 ERROR: Failed to process Deployment ds-test/nginx : local variable 'is_uptime' referenced before assignment
Traceback (most recent call last):
  File "/kube_downscaler/scaler.py", line 122, in autoscale_resource
    is_uptime,
UnboundLocalError: local variable 'is_uptime' referenced before assignment
2020-02-07 09:02:14,066 DEBUG: https://172.20.0.1:443 "GET /apis/apps/v1/namespaces/ds-test/statefulsets HTTP/1.1" 200 162

hjacobs added a commit that referenced this pull request Feb 7, 2020
@hjacobs hjacobs mentioned this pull request Feb 7, 2020
hjacobs added a commit that referenced this pull request Feb 7, 2020
@hjacobs
Owner

hjacobs commented Feb 7, 2020

Fixed in #86
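
For readers landing here from the traceback: the generic remedy for this class of bug is to bind the variable on every path before it is used. A sketch of that shape (illustrative; the actual diff in #86 may differ):

# bound before branching, so no code path can leave it unset
is_uptime = default_uptime
if upscale_matches(now):      # hypothetical helpers, as in the sketch above
    is_uptime = True
elif downscale_matches(now):
    is_uptime = False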

@hjacobs hjacobs closed this Feb 7, 2020