| KVM (kernel module) | |
|---|---|
| Name | KVM (kernel module) |
| Developer | Avi Kivity (Qumranet); maintained in the Linux kernel with contributions from Red Hat, Intel, AMD, and others |
| Initial release | 2007 (merged into Linux 2.6.20) |
| Repository | Linux kernel |
| Written in | C |
| Operating system | Linux |
| License | GNU General Public License |
KVM (Kernel-based Virtual Machine) is a Linux kernel module that provides hardware-assisted virtualization by exposing processor virtualization extensions to user-space virtual machine monitors. Maintained as part of the mainline Linux kernel, it is used by organizations such as Red Hat, Canonical, SUSE, Amazon Web Services, and Google to run virtual machines, most commonly in combination with QEMU for device emulation. KVM relies on CPU virtualization features from Intel (VT-x) and AMD (AMD-V) and cooperates with kernel subsystems such as the scheduler, memory management, and the I/O stack.
KVM turns the Linux kernel itself into a hypervisor by providing a kernel module that exposes the /dev/kvm device node, through which user-space managers create virtual machines and virtual CPUs. Originating from work by Avi Kivity at Qumranet and merged into mainline Linux with kernel 2.6.20 in 2007, KVM became the foundation for virtualization stacks deployed by providers such as Amazon EC2 and Google Compute Engine and by enterprise distributions from Red Hat and SUSE. It competes with other virtualization technologies such as Xen, VMware ESXi, and Hyper-V in the server and cloud markets.
KVM relies on processor virtualization extensions, Intel VT-x and AMD-V, to implement virtual CPUs (vCPUs) as ordinary schedulable kernel tasks. The module exposes an ioctl-based API on /dev/kvm consumed by user-space components including QEMU for device emulation, libvirt for management, and OpenStack for orchestration. Participating kernel subsystems include kernel threads (kthreads), cgroups, SELinux, Netfilter, and Device Mapper. For storage and networking, guests typically use paravirtualized virtio drivers whose host-side backends interface with the SCSI and NVMe stacks and with virtio-net network backends. The control path uses ioctl interfaces, while the data path can leverage VFIO for direct device assignment and SR-IOV for high-performance networking with devices from vendors such as Intel and Mellanox.
KVM requires a host running a Linux kernel on a CPU with hardware virtualization extensions: Intel VT-x or AMD-V on x86, with nested virtualization additionally supported on recent Intel and AMD processors. Platform support spans x86_64 servers from vendors such as Dell Technologies, Hewlett Packard Enterprise, and Lenovo, as well as cloud platforms like Amazon Web Services and Google Cloud Platform. On ARM, KVM uses the ARMv7 and ARMv8 virtualization extensions, with implementations contributed by Linaro, Applied Micro, and NVIDIA, including embedded and edge systems such as Jetson. Storage and networking requirements depend on subsystems like NVMe, SCSI, and SR-IOV and on drivers developed by firms including Intel, Broadcom, and Mellanox.
KVM provides features including full virtualization via Intel VT-x and AMD-V, paravirtualized I/O through virtio, device assignment via VFIO, live migration managed through libvirt and OpenStack, snapshotting integrated with QEMU and LVM volumes, and nested virtualization, which CI systems and development environments rely on to run VMs inside VMs. High-availability and orchestration integrations include Pacemaker, Corosync, and Ceph for distributed storage. KVM supports NUMA awareness, used in deployments by Oracle and SAP for database consolidation, and exposes performance counters compatible with perf and OProfile for profiling workloads such as Hadoop, Apache Spark, and TensorFlow training.
KVM scales across SMP hosts using kernel scheduler features such as the Completely Fair Scheduler (CFS) and hardware offloads including SR-IOV and PCI passthrough to deliver near-native performance in large web-scale and enterprise deployments. Benchmarks from organizations such as SPEC, along with results from the Phoronix Test Suite, show competitive compute, storage, and networking throughput compared with VMware ESXi and Xen. Cloud providers such as Amazon Web Services and Google tune kernel parameters, NUMA pinning, hugepages, and I/O schedulers like BFQ or mq-deadline to optimize latency-sensitive services and scale to thousands of guests on clusters orchestrated by OpenStack and Kubernetes.
KVM builds on kernel-level isolation with mandatory access controls such as SELinux and AppArmor, plus namespace isolation of the kind used by Docker-style containers, for layered defense. Device isolation uses VFIO and IOMMU support from Intel (VT-d) and AMD (AMD-Vi) to prevent DMA attacks, with mitigations coordinated through the Linux Security Module framework and vulnerability tracking via CERT advisories and the CVE system. Security-conscious deployments integrate with key management and attestation based on Trusted Platform Modules (TPMs), technologies such as Intel Software Guard Extensions, and cloud identity services such as AWS Identity and Access Management.
KVM development occurs in the Linux kernel tree, with contributions from corporations such as Red Hat, Intel, IBM, and SUSE and from individual developers including Avi Kivity. The project interfaces with the communities and governance of kernel.org, the OpenStack Foundation, the Cloud Native Computing Foundation, and distribution maintainers at Debian, Ubuntu, and Fedora. Adoption spans enterprises, cloud providers, research institutions, and vendors including Oracle and Canonical, with interoperability work involving VMware. Maintenance includes regular security patches, performance improvements, and feature backports coordinated through git workflows, LKML discussions, and conferences such as KVM Forum and Open Source Summit.