Kubernetes-based, scale-to-zero, request-driven compute
Escalator is a batch- or job-optimized horizontal autoscaler for Kubernetes
Run serverless GPU workloads with fast cold starts on bare-metal servers, anywhere in the world
AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports LLMs, embeddings, and speech-to-text.
OpenSearch Kubernetes Operator
Horizontal Pod Autoscaler built with predictive abilities using statistical models
Custom Pod Autoscaler program and base images, allows creation of Custom Pod Autoscalers
A Kubernetes controller for automatically optimizing pod requests based on their continuous usage. VPA alternative that can work with HPA.
Automatically scale LXC container resources on Proxmox hosts
Autoscale Docker Swarm services based on CPU utilization
Another Autoscaler is a Kubernetes controller that automatically starts, stops, or restarts pods from a deployment at a specified time using a cron expression.
Kubernetes pod autoscaler based on queue size in AWS SQS
Dynamically scale Kubernetes resources using the length of an AMQP queue (the number of messages available for retrieval from the queue) to determine the load
Jenkins autoscaler that scales VMs based on executor usage
Operator for managing Kubernetes Custom Pod Autoscalers (CPA).
Kubernetes operator that prescales cluster nodes to ensure cronjobs start exactly on time
HireFire integration library for Ruby applications
Terraform module to autoscale ECS Service based on CloudWatch metrics
Dynamically scale Kubernetes resources using the length of an AMQP queue
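Several of the projects above (the SQS- and AMQP-driven scalers) share the same core rule: compute desired replicas from queue length divided by a per-pod message target, clamped to a min/max range. As a minimal sketch of that rule, the `desiredReplicas` helper below is hypothetical and not taken from any listed repository:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches the queue-length scaling rule used by
// queue-driven autoscalers: replicas = ceil(queueLength / targetPerPod),
// clamped to the [min, max] replica range.
func desiredReplicas(queueLength, targetPerPod, min, max int) int {
	if targetPerPod <= 0 {
		return min // avoid division by zero; fall back to the floor
	}
	n := int(math.Ceil(float64(queueLength) / float64(targetPerPod)))
	if n < min {
		n = min
	}
	if n > max {
		n = max
	}
	return n
}

func main() {
	// 250 queued messages at a target of 100 per pod -> 3 replicas.
	fmt.Println(desiredReplicas(250, 100, 1, 10))
	// An empty queue scales down to the configured minimum.
	fmt.Println(desiredReplicas(0, 100, 1, 10))
}
```

Real controllers wrap this calculation in a reconcile loop that polls the queue backend and patches the deployment's replica count; the clamping step is what keeps a traffic spike from scaling past cluster capacity.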