LLMpedia: The first transparent, open encyclopedia generated by LLMs

Corosync

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Corosync
Name: Corosync
Developer: The Corosync community (associated with ClusterLabs); originally derived from the OpenAIS project
Initial release: 2008 (as a fork of the OpenAIS project)
Programming language: C (programming language)
Operating system: Linux
License: BSD licenses (3-clause BSD)

Corosync is an open-source clustering and high-availability messaging project designed to provide reliable group communication, membership, and quorum services for distributed systems. It is used as a foundational component in cluster stacks, enabling coordination among nodes for projects ranging from database replication to telecommunications and virtualization. Corosync integrates with a variety of orchestration and resource-management tools to deliver fault-tolerant services for enterprises and research deployments.

Overview

Corosync implements core cluster services, including reliable multicast messaging, membership notification, and quorum calculation, enabling higher-level projects such as Pacemaker clusters and DRBD replication to coordinate. Its design emphasizes low-latency, deterministic message delivery suitable for workloads on distributions such as Red Hat Enterprise Linux and SUSE Linux Enterprise Server, and on cloud platforms such as Amazon Web Services, where a consistent view of cluster state across nodes is essential. Corosync draws architectural inspiration from academic and production group-communication systems such as the Isis Toolkit, the Spread Toolkit, and the Totem protocol, while targeting the Linux and POSIX ecosystem.

Architecture and Components

At its core, Corosync provides a Cluster Engine that exposes C APIs for messaging and membership; common components include the closed process group (CPG) messaging service, the Totem Single Ring Ordering and Membership Protocol, a runtime configuration map (cmap), and a Quorum subsystem. Corosync's Totem protocol originated in group-communication research at institutions such as Cornell University (home of the Isis Toolkit) and the University of California, Santa Barbara (home of the Totem work). Typical deployments pair Corosync with a cluster resource manager such as Pacemaker and block-level replication such as DRBD to coordinate fencing and resource allocation across nodes, for example those running Proxmox VE or oVirt.

Key components:
- The Membership Layer implements failure detection and view changes, similar in purpose to mechanisms in Apache ZooKeeper and Raft-based systems such as etcd, though Corosync uses ring-based ordering rooted in the Totem protocol.
- The Messaging Layer supports reliable, ordered multicast, comparable in spirit to log-based systems such as Apache Kafka, though aimed at different use cases.
- The Quorum subsystem integrates with fencing agents (STONITH drivers), including implementations for hardware from vendors such as Dell EMC and Hewlett Packard Enterprise.
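The ring-based total ordering that distinguishes Corosync from Raft-style systems can be sketched with a toy model: a token circulates the ring, and only the current token holder assigns the next global sequence number, so every node delivers messages in the same order. This is an illustration of the concept only, not the real Totem protocol (no message loss, retransmission, or membership changes are modeled), and all names are invented:

```python
# Toy model of token-based total ordering (Totem-style concept).
# The node holding the token stamps its pending messages with globally
# increasing sequence numbers; replaying the log gives every node the
# same delivery order.

def run_ring(nodes, pending, rounds=3):
    """nodes: ring order of node ids; pending: dict node -> list of messages."""
    seq = 0
    delivered = []  # (seq, node, message) in global delivery order
    for _ in range(rounds):
        for node in nodes:            # the token visits nodes in ring order
            while pending.get(node):
                seq += 1              # only the token holder assigns sequence numbers
                delivered.append((seq, node, pending[node].pop(0)))
    return delivered

order = run_ring(["a", "b", "c"], {"a": ["a1"], "b": ["b1", "b2"], "c": []})
# Every node replaying this log delivers a1, then b1, then b2.
```

Because sequence numbers are assigned at a single point (the token holder) rather than negotiated per message, ordering is deterministic and cheap once the ring is stable, which is the property the real protocol exploits for low-latency delivery.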

Features and Functionality

Corosync provides ordered reliable multicast, membership and quorum notifications, and a distributed in-memory configuration database (cmap) used by higher-level components. These features enable service continuity for software such as MySQL, PostgreSQL, and MariaDB, and for replication managers such as Galera Cluster. Corosync's token-passing, ring-based ordering makes it suitable for near-real-time systems found in telecommunications stacks and network function virtualization platforms that integrate with OpenStack components.

Functional highlights:
- Reliable ordered messaging for failover coordination in clusters, usable alongside orchestration tools such as Ansible, SaltStack, and Chef.
- Cluster membership events that permit integration with service-discovery systems used in Consul and ZooKeeper deployments.
- Quorum management that supports split-brain avoidance patterns common in storage clusters underpinning Ceph and GlusterFS.
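The split-brain avoidance mentioned above reduces to a strict-majority rule: after a network partition, a side may continue only if it holds more than half of the expected votes, so at most one side can keep running. A minimal sketch of that rule (a hypothetical helper for illustration, not the Corosync API):

```python
# Sketch of the strict-majority rule behind votequorum-style quorum.
# A partition keeps quorum only with a strict majority of expected votes,
# so two partitions can never both proceed after a split.

def has_quorum(votes_present: int, expected_votes: int) -> bool:
    # strict majority: floor(expected_votes / 2) + 1 votes are required
    return votes_present >= expected_votes // 2 + 1

# A 5-node cluster splitting 3/2: only the larger side keeps quorum.
assert has_quorum(3, 5) and not has_quorum(2, 5)

# An even 2/2 split loses quorum on BOTH sides; this is why two-node
# clusters typically enable Corosync's special two-node handling.
assert not has_quorum(2, 4)
```

The even-split case shows the trade-off: majority quorum guarantees safety but sacrifices availability when no side has a majority, which is what dedicated two-node and quorum-device mechanisms are designed to mitigate.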

Configuration and Administration

Administrators configure Corosync through a text-based configuration file (typically /etc/corosync/corosync.conf) that describes transport bindings, node lists, and quorum settings; common management tasks are performed on distributions such as Debian, Ubuntu, CentOS, and Fedora. Integration with systemd (or SysVinit init scripts on older systems) provides lifecycle control, while logging and monitoring tie into observability stacks built on Prometheus, Grafana, and ELK Stack components.
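As a sketch, a minimal two-node configuration for Corosync 3.x might look like the following (the cluster name, node names, and addresses are placeholders, and real deployments typically tune many more parameters):

```
totem {
    version: 2
    cluster_name: example
    transport: knet
}

nodelist {
    node {
        ring0_addr: 192.0.2.11
        name: node1
        nodeid: 1
    }
    node {
        ring0_addr: 192.0.2.12
        name: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_syslog: yes
}
```

The `two_node: 1` flag relaxes the usual majority rule for two-node clusters, where an even split would otherwise cost both sides quorum; it is normally combined with fencing to stay safe.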

Operational considerations:
- Network configuration involves binding to specific interfaces and tuning token and timeout parameters, much like network tuning in the Linux kernel and Netfilter.
- High-availability deployments often combine Corosync with fencing devices and power management hardware from vendors such as APC and Schneider Electric.
- Backup and recovery practices align with database and virtualization procedures used for KVM guests and LXC containers.
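Day-to-day inspection typically relies on the command-line tools shipped with Corosync; an illustrative transcript (actual output depends on the running cluster):

```shell
# Show ring/link status of the local corosync daemon
corosync-cfgtool -s

# Show quorum state, vote counts, and expected votes
corosync-quorumtool -s

# Inspect the runtime configuration database (cmap),
# e.g. the current membership entries
corosync-cmapctl | grep members

# Lifecycle control under systemd
systemctl status corosync
```

These tools read live state from the daemon, so they are usually the first stop when diagnosing membership flapping or unexpected quorum loss.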

Use Cases and Integrations

Corosync is deployed in enterprise high-availability clusters for databases, file systems, virtualization, and telecom systems. Real-world integrations include pairing with Pacemaker for resource orchestration, using DRBD for block-level replication, and serving as the messaging substrate for cluster-aware applications in Proxmox VE and Red Hat's cluster offerings. Telecom carriers and research labs combine Corosync with software from OpenDaylight and ONAP for carrier-grade control-plane redundancy.

Representative use cases:
- Active/passive and active/active database failover with MySQL and PostgreSQL.
- Shared-storage fencing for SAN environments using SCSI-based fencing agents and hardware from Hewlett Packard Enterprise.
- Virtual machine HA in platforms such as oVirt and Proxmox VE.
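The common Corosync-plus-Pacemaker pairing is usually bootstrapped with the `pcs` frontend, which writes the Corosync configuration and starts both daemons. An illustrative transcript (host names, the cluster name, and the IP address are placeholders):

```shell
# Authenticate cluster hosts and create a Corosync/Pacemaker cluster
pcs host auth node1 node2
pcs cluster setup mycluster node1 node2
pcs cluster start --all

# Add a floating IP resource managed by Pacemaker on top of Corosync
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.0.2.100 cidr_netmask=24
```

Here Corosync supplies membership and quorum, while Pacemaker decides where resources such as the floating IP should run and moves them when membership changes.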

Development and History

Corosync began in 2008 as a fork of the OpenAIS project, an open-source implementation of the Service Availability Forum's Application Interface Specification, which in turn built on cluster-messaging research such as the Totem protocol. Over time, stewardship consolidated around open-source governance, with contributions from projects and organizations including ClusterLabs and major distributions such as Red Hat and SUSE. The project roadmap has been influenced by advances in consensus protocols exemplified by Raft and systems such as etcd, even as Corosync preserves its ring-based Totem heritage.

Notable milestones include adoption in Linux distribution cluster stacks, widespread use in virtualization platforms, and continued integration with cloud-native orchestration efforts such as OpenStack and services in Amazon Web Services and Google Cloud Platform. Development continues in public repositories with contributors from academic, enterprise, and independent communities collaborating to maintain reliability for critical infrastructure.

Category:Open source software