| KEDA | |
|---|---|
| Name | KEDA |
| Developer | CNCF, Microsoft, Red Hat |
| Initial release | 2019 |
| Repository | github.com/kedacore/keda |
| License | Apache-2.0 |
| Programming language | Go |
| Operating system | Linux, macOS |
| Platform | Kubernetes |
KEDA (Kubernetes Event-Driven Autoscaling) is a Kubernetes-based event-driven autoscaler that enables containerized workloads to scale in response to external event sources. It integrates with Kubernetes, Prometheus, Google Cloud Pub/Sub, and many messaging systems to provide fine-grained scaling for pods and jobs, and it underpins the Kubernetes hosting model of Azure Functions. Originally developed by Microsoft and Red Hat, KEDA is now a graduated Cloud Native Computing Foundation (CNCF) project used across cloud providers and on-premises clusters.
KEDA acts as a bridge between event sources such as Apache Kafka, RabbitMQ, Azure Service Bus, and Amazon SQS and the scaling primitives of the Kubernetes Horizontal Pod Autoscaler (HPA), enabling containerized applications, including those running on Knative or OpenShift, to scale with demand. It exposes event metrics through the Kubernetes external metrics API, which the HPA consumes, and integrates with telemetry systems such as Prometheus, Grafana, the Elastic Stack, and Datadog. Contributors include individuals and organizations active in the cloud-native ecosystem, among them Microsoft, Red Hat, and independent maintainers on GitHub.
KEDA’s control-plane components run as Kubernetes controllers and manage scaling behavior through custom resources and a metrics adapter. The controllers talk to the Kubernetes API and are built on the controller-runtime libraries that also underpin Kubebuilder and the Operator Framework. The runtime uses Go libraries common to projects such as the Prometheus Operator and works alongside service meshes such as Istio and Linkerd where sidecar proxies are present. In cloud-native deployments, KEDA is commonly paired with tools like Helm, Flux, and Argo CD for lifecycle and GitOps management.
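As a sketch of this layout: a default installation runs the controller and the metrics adapter as separate Deployments. The commands and component names below reflect the conventional Helm installation into a `keda` namespace and may differ across versions.

```shell
# Inspect the KEDA control plane (assumes the conventional "keda" namespace).
kubectl get deployments -n keda

# Typical components:
#   keda-operator                    - reconciles ScaledObject/ScaledJob resources
#   keda-operator-metrics-apiserver  - serves external metrics to the HPA
```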
KEDA exposes custom resources, including ScaledObject and ScaledJob, to declare event-driven scaling behavior. A ScaledObject ties a Deployment, StatefulSet, or any resource implementing the /scale subresource to one or more scalers (e.g., Azure Queue Storage, Amazon Kinesis, Google Cloud Pub/Sub), while a ScaledJob provisions Kubernetes Jobs in response to events from sources such as Apache ActiveMQ or NATS. These CRDs are reconciled by controllers in the style of the Operator Framework and follow standard Kubernetes API patterns for CustomResourceDefinitions and admission controllers.
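A minimal ScaledObject sketch, assuming a hypothetical `queue-consumer` Deployment in the same namespace and a RabbitMQ queue (the queue name and connection string are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer        # hypothetical Deployment to scale
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders
        mode: QueueLength       # scale on the number of queued messages
        value: "10"             # target messages per replica
        host: amqp://user:pass@rabbitmq.default.svc:5672/  # illustrative only
```

In practice the connection string would be supplied via a TriggerAuthentication or an environment variable rather than inline.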
KEDA supports a large catalog of scalers covering cloud, messaging, and database systems: Azure Event Hubs, Azure Service Bus, Azure Storage Queues, Amazon SQS, Amazon Kinesis, Google Cloud Pub/Sub, Apache Kafka, RabbitMQ, NATS, Redis, MongoDB, and SQL databases such as PostgreSQL and MySQL. Community and vendor contributions add scalers for systems such as Prometheus, GitHub Actions runners, and Apache Pulsar. The scaler interface is extensible: custom scalers can be implemented in Go within the KEDA codebase or run out of process as external scalers that KEDA queries over gRPC, complementing adapters such as the Kubernetes Metrics Server and the Prometheus Adapter.
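Scalers from this catalog share a common shape inside a ScaledObject: a `type` plus scaler-specific `metadata`. A hedged sketch of a `triggers` section combining a Kafka and a Prometheus scaler (addresses, topic, and query are illustrative):

```yaml
triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.svc:9092          # illustrative broker address
      consumerGroup: order-processors
      topic: orders
      lagThreshold: "50"                        # target consumer lag per replica
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090
      query: sum(rate(http_requests_total[2m])) # illustrative PromQL query
      threshold: "100"
```

With multiple triggers, KEDA scales on whichever trigger demands the most replicas.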
KEDA is installed via plain manifests or package managers such as Helm and runs on distributions including AKS, EKS, GKE, OpenShift, and self-hosted kubeadm clusters. Configuration typically involves creating a namespace and granting Role-Based Access Control (RBAC) permissions, following controller-deployment patterns similar to those of Cert-Manager and Flux. For cloud integrations, operators often provision service principals or IAM roles in Azure Active Directory, AWS Identity and Access Management, or Google Cloud IAM and supply credentials via Kubernetes Secrets or tools like HashiCorp Vault.
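Installation with Helm follows the pattern documented by the project (chart repository and release name shown here are the conventional ones):

```shell
# Add the official KEDA chart repository and install into a dedicated namespace.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace
```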
Common use cases include event-driven microservices consuming messages from Apache Kafka or Amazon SQS, serverless-style batch processing (the Kubernetes hosting model for Azure Functions is built on KEDA), and cost optimization for bursty workloads running on OpenShift or AKS. Enterprise adopters include platform teams that integrate KEDA into CI/CD pipelines using GitHub Actions, Jenkins, or Tekton, and it is cited in architectures alongside projects such as Knative and Dapr to provide event-driven scaling in hybrid and multi-cloud deployments.
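For batch-style processing, a ScaledJob launches a Kubernetes Job per batch of pending work instead of scaling a long-running Deployment. A minimal sketch against an SQS queue (queue URL, image, and thresholds are illustrative):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: batch-worker
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: worker
            image: example.com/batch-worker:latest   # hypothetical worker image
        restartPolicy: Never
  pollingInterval: 30          # seconds between checks of the event source
  maxReplicaCount: 10          # cap on concurrent Jobs
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs  # illustrative
        queueLength: "5"       # target messages per Job
        awsRegion: us-east-1
```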
KEDA requires permissions to read its custom resources and to serve the Kubernetes external metrics API; installation configures RBAC roles and service accounts, consistent with practices used by Istio, the Prometheus Operator, and Cert-Manager. For cloud provider integrations, administrators scope credentials via Azure managed identities, AWS IAM Roles for Service Accounts, or GCP Workload Identity to limit the attack surface. Security hardening recommendations align with CNCF and NIST guidance for cloud-native workloads and with OWASP patterns for secret management and least-privilege access.
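Scaler credentials are typically factored out of trigger metadata into a TriggerAuthentication resource, which a trigger references by name via `authenticationRef`. A sketch that pulls a connection string from a hypothetical Kubernetes Secret (pod-identity providers such as AWS IRSA or Azure Workload Identity can be used instead via `spec.podIdentity`):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-host-auth
spec:
  secretTargetRef:
    - parameter: host              # scaler parameter to populate
      name: rabbitmq-connection    # hypothetical Secret in the same namespace
      key: connectionString        # key within that Secret
```

This keeps secrets out of ScaledObject manifests and lets several triggers share one credential source.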