LLMpedia: the first transparent, open encyclopedia generated by LLMs

DaemonSet

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Prometheus Operator (Hop 5)
Expansion Funnel: Raw 75 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 75
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
DaemonSet
Name: DaemonSet
Type: Kubernetes workload
Introduced: 2015 (Kubernetes 1.1)
Developer: Google; Cloud Native Computing Foundation
Written in: Go
License: Apache License 2.0

DaemonSet

DaemonSet is a Kubernetes workload primitive that ensures a copy of a specific Pod runs on all or selected nodes within a cluster. It is used to deploy system-level agents such as log collectors, monitoring agents, and networking components on every node. Kubernetes itself is maintained under the Cloud Native Computing Foundation and was originally developed at Google. Operators at enterprises like Netflix, Airbnb, and Pinterest, and on cloud providers including Amazon Web Services, Microsoft Azure, and Google Cloud Platform, commonly rely on DaemonSet patterns when integrating with tools such as Prometheus, Fluentd, Istio, and Envoy.

Overview

A DaemonSet schedules Pods so that one copy runs on every node, or on a defined subset of nodes determined by selectors and affinity rules. It complements other Kubernetes workload primitives such as ReplicaSet, Deployment, StatefulSet, Job, and CronJob by covering node-level service deployment needs. Typical DaemonSet workloads include logging agents such as Fluent Bit and Filebeat, monitoring agents like Node Exporter (part of the Prometheus ecosystem), networking daemons from Calico or Cilium, and per-node service mesh agents from Istio or Linkerd. Enterprise adopters such as Red Hat, VMware, and IBM, as well as research groups at Stanford University and MIT, publish operational patterns that leverage DaemonSets for observability, security, and network policy enforcement.
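As a sketch of the pattern described above, a minimal DaemonSet manifest might look like the following (the name node-agent and the image reference are illustrative placeholders, not a real project):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent       # must match spec.selector
    spec:
      containers:
      - name: agent
        image: example.com/node-agent:1.0   # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
          limits:
            memory: 200Mi
```

Applying this manifest causes the DaemonSet controller to create one Pod per eligible node, and to add a Pod automatically when a new node joins the cluster.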

Design and behavior

DaemonSet behavior is governed by a controller within the kube-controller-manager component of Kubernetes and interacts with core objects such as Pod, Node, ServiceAccount, and ConfigMap. By default a DaemonSet creates one Pod per eligible node; eligibility can be refined using node selectors, node affinity, taints and tolerations, and label-based matching. DaemonSet rollout semantics build on the Kubernetes API machinery, the watch events and informers that controllers consume, and node-level components such as the kubelet, with cluster state persisted in etcd. A DaemonSet supports update strategies analogous to those of Deployment objects, allowing rolling updates that coordinate with lifecycle hooks and readiness probes. DaemonSets also interact with autoscaling tools such as the Kubernetes Cluster Autoscaler and managed platforms such as Google Kubernetes Engine and Amazon EKS, which influence when DaemonSet Pods are scheduled or evicted.
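The eligibility refinements mentioned above can be sketched as a fragment of the Pod template. The label and taint keys shown are standard Kubernetes well-known labels and taints; the surrounding manifest is assumed:

```yaml
# Fragment of a DaemonSet's spec.template.spec
nodeSelector:
  kubernetes.io/os: linux          # restrict to Linux nodes
tolerations:
# Allow scheduling onto control-plane nodes, which carry this taint by default
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```

Without the toleration, DaemonSet Pods would be kept off tainted control-plane nodes like any other workload.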

Configuration and fields

A DaemonSet manifest is defined in YAML or JSON via the Kubernetes API under apiVersion apps/v1 and includes top-level fields such as metadata, spec, and status. Key configuration fields include spec.selector, spec.template (holding the PodTemplateSpec), spec.updateStrategy, and nodeSelector labels within the Pod template. spec.updateStrategy.type supports RollingUpdate, with maxUnavailable semantics, and OnDelete for manual control. Additional fields reference volumes, containers, securityContext, and serviceAccountName, which tie to objects like Secret and ConfigMap. Node-level control uses annotations and labels consistent with conventions from distributions such as OpenShift and Rancher. Operators may use admission controllers like OPA Gatekeeper or Kyverno to enforce policies on DaemonSet manifests for compliance and governance in enterprises such as Goldman Sachs or Capital One.
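The update strategy fields described above can be expressed as a small spec fragment (a sketch; the value of maxUnavailable is a common but arbitrary choice):

```yaml
# Fragment of a DaemonSet spec
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # replace at most one node's Pod at a time
```

Setting type to OnDelete instead would leave old Pods running until an operator deletes them manually, giving full control over the rollout pace.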

Use cases and examples

DaemonSets are commonly used for:
- Observability agents: deploying Prometheus exporters like Node Exporter, log shippers such as Fluentd or Fluent Bit, and tracing collectors from Jaeger or Zipkin.
- Networking and CNI: deploying CNI components from Calico, Cilium, or Weave Net to every node.
- Storage and local daemons: running local storage helpers or CSI drivers used by projects like Rook or Longhorn.
- Security and compliance: hosting runtime security agents from vendors like Aqua Security, Palo Alto Networks, or open-source projects such as Falco.
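The log-shipper use case above can be sketched as a DaemonSet that mounts the node's log directory. This is a non-authoritative example: the namespace, names, Fluent Bit image tag, and paths are illustrative assumptions, not a vendor-provided manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper          # illustrative name
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      serviceAccountName: log-shipper   # assumed pre-existing ServiceAccount
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2    # version tag is illustrative
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log     # node-level log directory exposed to the agent
```

The hostPath volume is what makes this a node-level pattern: each Pod reads the logs of the node it runs on, which a Deployment could not guarantee.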

Example patterns appear in case studies by Spotify, Dropbox, Goldman Sachs, and research at UC Berkeley showing how DaemonSets scale across clusters and integrate with CI/CD platforms like Jenkins and GitLab CI/CD.

Management and lifecycle

Lifecycle operations for DaemonSets are managed via kubectl, client libraries in languages like Go and Python, and infrastructure tools such as Terraform and Helm. Common commands include create, apply, delete, and rollout status; operators often combine these with observability from Prometheus and logging to ELK Stack components. Upgrading a DaemonSet with RollingUpdate semantics has the kube-controller-manager orchestrate Pod deletion and recreation while honoring PodDisruptionBudgets and readiness checks. For large clusters, strategies used by teams at Google, Facebook, and LinkedIn include staged rollouts, canary nodes, and drain operations using kubectl drain, which interact with cluster autoscaling and the behavior of the Kubernetes scheduler.
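A typical command session for the operations above might look like the following (the manifest file name, DaemonSet name, and labels are illustrative; the commands themselves are standard kubectl):

```shell
# Create or update the DaemonSet from a manifest
kubectl apply -f node-agent-daemonset.yaml

# Watch the rolling update progress
kubectl rollout status daemonset/node-agent -n kube-system

# Inspect which nodes received a Pod
kubectl get pods -n kube-system -l app=node-agent -o wide

# Drain a node for maintenance; DaemonSet Pods are skipped with this flag
kubectl drain <node-name> --ignore-daemonsets

# Delete the DaemonSet (its Pods are garbage-collected)
kubectl delete daemonset node-agent -n kube-system
```

The --ignore-daemonsets flag matters during drains because DaemonSet Pods would otherwise be recreated immediately by the controller, blocking the drain.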

Limitations and considerations

DaemonSets run one Pod per node, which can conflict with resource constraints on small or tainted nodes managed by platforms like Azure Kubernetes Service. They are not suited for ordinary horizontally scalable application workloads, which are better served by a ReplicaSet or a HorizontalPodAutoscaler. Scheduling DaemonSet Pods onto GPU nodes or other specialized nodes requires careful use of nodeAffinity and tolerations, often informed by best practices from NVIDIA GPU Operator deployments. Security considerations include least-privilege service accounts, restricting hostPath mounts, and minimizing privileged containers to reduce attack surface, guidance echoed by CIS benchmarks and NIST standards. Operational complexity grows in multi-tenant clusters used by organizations such as Uber and Lyft, where admission control, network policies, and Kubernetes RBAC must be coordinated.
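The least-privilege guidance above can be sketched in the Pod template's security settings. This is a hedged example, not a compliance baseline; the UID and image are placeholders:

```yaml
# Fragment of spec.template.spec for a hardened DaemonSet Pod
securityContext:
  runAsNonRoot: true
  runAsUser: 10001               # illustrative non-root UID
containers:
- name: agent
  image: example.com/agent:1.0   # placeholder image
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop: ["ALL"]              # drop all Linux capabilities by default
```

Agents that genuinely need node access (for example, reading host logs) should be granted only the specific mounts and capabilities they require rather than running as privileged containers.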

Category:Kubernetes resources