| Linux bridge | |
|---|---|
| Name | Linux bridge |
| Developer | Linux kernel developers |
| First release | 2.2 (kernel series) |
| Operating system | Linux kernel |
| License | GNU General Public License, version 2 |
Linux bridge is a software implementation of a network bridge integrated into the Linux kernel that forwards Ethernet frames between network interfaces at Layer 2. It provides transparent switching functionality used in virtualized environments, container orchestration, and data center networking, interoperating with tools from projects such as QEMU, KVM, Docker, and OpenStack. Originating from early bridging code merged during the 2.2 kernel era, its development has involved contributors associated with the Netfilter project and the IEEE 802.1 bridging standards.
The Linux bridge implements IEEE 802.1D/802.1Q switching behavior within the kernel, offering MAC address learning, frame forwarding, and VLAN handling. It is embedded in the Linux kernel networking stack and interacts with subsystems such as iptables/nftables, systemd-networkd, and NetworkManager. Administrators commonly manage bridges with userspace utilities: the ip and bridge commands from iproute2, maintained by the netdev community, or the legacy brctl command from bridge-utils. The bridge is widely used in virtualization stacks such as Xen Project, Proxmox VE, and oVirt.
At the kernel level, the bridge is implemented as a network device type that hooks into the netfilter and Traffic Control (tc) subsystems, enabling packet filtering and shaping. Core components include the bridge forwarding database (FDB) for MAC learning, bridge ports representing physical or virtual network interfaces, and VLAN filtering that enforces 802.1Q tag handling. The datapath coexists with kernel features such as SR-IOV passthrough, eBPF for programmable packet processing, and link settings exposed via ethtool. Control-plane responsibilities are handled by userspace daemons or management tools such as systemd-networkd, NetworkManager, and orchestration platforms like Kubernetes using CNI plugins.
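The FDB and per-port state described above can be inspected from userspace with iproute2. A minimal sketch, assuming a bridge named br0 already exists on the host (these commands read live network state):

```shell
# Inspect bridge internals with iproute2 (br0 is an assumed, existing bridge).
bridge -d link show            # per-port detail: STP state, hairpin, learning flags
bridge fdb show br br0         # forwarding database: learned MAC-to-port entries
ip -d link show dev br0        # bridge options: ageing_time, vlan_filtering, stp_state
```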
Administrators typically create and configure bridges using the iproute2 suite (the "bridge" subcommand) or legacy bridge-utils commands. Configuration can be declared in distribution-specific network files managed by systemd-networkd, Netplan, or ifupdown, or by orchestration stacks such as OpenStack Neutron and Kubernetes CNI plugins like Calico, Flannel, and Weave Net that programmatically manipulate bridge devices. Management tasks include adding and removing bridge ports, tuning Spanning Tree Protocol parameters (the in-kernel implementation provides classic 802.1D STP; Rapid Spanning Tree requires a userspace daemon such as mstpd), enabling VLAN-aware bridging, and integrating with Linux bonding for link aggregation using standards such as IEEE 802.3ad (LACP).
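As a minimal sketch of the iproute2 workflow, a bridge can be created and a NIC enslaved as follows (requires root; eth0 is a placeholder interface name):

```shell
# Create a bridge, enable in-kernel STP, and attach a port (run as root).
ip link add name br0 type bridge stp_state 1
ip link set dev eth0 master br0   # enslave the physical NIC to the bridge
ip link set dev br0 up
ip link set dev eth0 up
# Legacy bridge-utils equivalent: brctl addbr br0 && brctl addif br0 eth0
```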
The bridge supports MAC learning, suppression of unknown-unicast flooding once addresses are learned, and aging timers consistent with IEEE 802.1D. VLAN-aware mode enforces 802.1Q tag processing and VLAN filtering, enabling per-VLAN forwarding decisions and integration with 802.1X port authentication in wired deployments. Its Spanning Tree Protocol implementation prevents loops when multiple bridges are interconnected and interoperates with network devices from vendors such as Cisco Systems, Juniper Networks, and Arista Networks. The bridge integrates with ebtables for Ethernet-frame filtering and with nftables/iptables for higher-layer filtering, and supports IGMP and MLD snooping to constrain multicast traffic. Advanced features include hairpin mode (reflective relay) for container networking, coexistence with Open vSwitch datapaths managed via ovs-vsctl, and hardware offload capabilities alongside features reported by ethtool.
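VLAN-aware operation can be sketched with iproute2 as follows (requires root; br0, eth0, and eth1 are placeholder names, VLAN 10 an illustrative ID):

```shell
# Turn on VLAN filtering, then define an access port and a trunk port.
ip link set dev br0 type bridge vlan_filtering 1
bridge vlan add dev eth0 vid 10 pvid untagged   # access port: untagged frames map to VLAN 10
bridge vlan add dev eth1 vid 10                 # trunk port: carries VLAN 10 tagged
bridge vlan show                                # verify per-port VLAN membership
```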
Common deployments include virtual machine networking for KVM/QEMU guests, container networking for Docker and Podman workloads, and tenant isolation in cloud platforms like OpenStack and public clouds built on Linux-based hypervisors. In edge and enterprise environments, bridges connect virtual workloads to physical switches, support nested virtualization, and serve test lab topologies for projects such as Mininet and GNS3. Service providers and research networks often combine bridging with VXLAN and GRE tunnels implemented by kernel modules to create overlay networks for multitenant isolation, interoperating with control-plane systems like BGP route reflectors and EVPN-capable controllers.
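One common overlay pattern is attaching a VXLAN tunnel endpoint as a bridge port; a sketch with iproute2, where the VNI, multicast group, and device names are illustrative (requires root):

```shell
# Create a VXLAN interface (VNI 100) and enslave it to the bridge.
ip link add vxlan100 type vxlan id 100 dstport 4789 \
    group 239.1.1.1 dev eth0 ttl 5
ip link set dev vxlan100 master br0   # frames bridged to vxlan100 are VXLAN-encapsulated
ip link set dev vxlan100 up
```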
Performance depends on kernel version, network interface drivers, and offload support such as checksum offload and TSO, observable with tools like perf and pktgen. For high-throughput scenarios, technologies like SR-IOV, DPDK, and hardware NIC features can reduce CPU overhead, while zero-copy mechanisms in vhost-user and VFIO bypass the bridge entirely for faster paths. Security considerations include integration with netfilter rulesets to mitigate L2 attacks (MAC spoofing, ARP spoofing), use of port isolation and VLAN segmentation to enforce tenant separation, and runtime controls via seccomp and SELinux when exposing management APIs. Enabling Spanning Tree Protocol, BPDU guard or filtering, and 802.1X authentication reduces risk in multi-switch topologies; auditing and monitoring with tools such as tcpdump and syslog assist operational security.
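Several of the L2 hardening measures above map directly to per-port bridge options; a sketch with assumed names br0 and eth0 and an illustrative MAC address (requires root):

```shell
# Per-port hardening knobs (names and MAC are placeholders).
bridge link set dev eth0 isolated on              # isolated ports reach only non-isolated ports
bridge link set dev eth0 learning off flood off   # disable learning and unknown-unicast flooding
bridge fdb add 52:54:00:12:34:56 dev eth0 master static   # pin the expected MAC instead
ip link set dev br0 type bridge mcast_snooping 1  # IGMP/MLD snooping to limit multicast
```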
Category:Linux networking