| KVM | |
|---|---|
| Name | KVM |
| Developer | Qumranet (acquired by Red Hat in 2008); maintained in the mainline Linux kernel |
| Initial release | 2007 (merged into Linux kernel 2.6.20) |
| Programming language | C |
| Operating system | Linux |
| License | GNU General Public License, version 2 |
KVM (Kernel-based Virtual Machine) is a virtualization technology integrated into the Linux kernel that enables multiple virtual machines to run unmodified guest operating systems by using hardware virtualization extensions. It originated as a project to leverage processor features from Intel (VT-x) and AMD (AMD-V) and has become a core component in cloud computing stacks used by vendors such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. KVM combines kernel-level CPU and memory virtualization with user-space device emulation and management tools from communities and vendors such as QEMU, Red Hat, and Canonical.
KVM began as a project at Qumranet to bring virtualization to the Linux kernel using the hardware-assisted virtualization features of Intel VT-x and AMD-V processors. After it was merged into the mainline kernel in version 2.6.20 (2007), contributors from companies such as Red Hat, IBM, Intel, AMD, and SUSE expanded its capabilities, integrating it with projects including QEMU, libvirt, and systemd. KVM's development interacted with the consolidation of virtualization stacks in enterprise distributions such as Red Hat Enterprise Linux and Ubuntu, and with standards and integration work in communities such as OpenStack and the Linux Foundation. Over time, KVM influenced and was influenced by cloud initiatives from Amazon, innovations in container orchestration from Kubernetes, and virtualization research at institutions such as MIT and Stanford University.
KVM is architected as a set of kernel modules that expose a virtualization interface and rely on hardware features in processors from Intel Corporation and Advanced Micro Devices. The core modules, kvm.ko and the architecture-specific kvm-intel.ko or kvm-amd.ko, create a per-VM kernel execution context and expose the /dev/kvm device node, whose ioctl interface is consumed by user-space emulators such as QEMU. The architecture separates responsibilities: the kernel implements CPU and memory virtualization primitives, while user space implements device models and I/O through projects like QEMU and management libraries such as libvirt. Integration points include the VFIO framework for secure device assignment, the VirtIO paravirtualized device standard, and kernel subsystems such as cgroups and NUMA support. The modular design allows orchestration systems like OpenStack and Kubernetes (via KubeVirt) to schedule VMs alongside containers.
KVM implements hardware-assisted virtualization, leveraging Intel VT-x and AMD-V, including their nested virtualization extensions, to support running guest hypervisors. It virtualizes guest memory through hardware-assisted MMU features such as Intel EPT and AMD RVI and supports huge pages and NUMA topology awareness. Paravirtualized devices are supplied through the VirtIO standard, implemented in guest drivers by vendors including Red Hat, Canonical, and SUSE. I/O and device assignment use VFIO for safe DMA and interrupt remapping backed by IOMMU hardware such as Intel VT-d and AMD-Vi. For guest timekeeping KVM exposes a paravirtual clock (kvm-clock), which guests typically supplement with synchronization daemons such as NTP and Chrony, and it supports features such as live migration, snapshotting, and memory ballooning implemented by user-space tools like QEMU and managed through libraries like libvirt.
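As an illustration of the nested virtualization support mentioned above, the feature is controlled by a module parameter on the host (a minimal sketch; the file path is a common convention, and recent kernels enable nesting by default on many platforms):

```
# /etc/modprobe.d/kvm.conf -- illustrative host configuration
options kvm_intel nested=1    # kvm_amd accepts the same "nested" parameter
```

With nesting enabled, a guest sees the VT-x or AMD-V extensions and can itself load the kvm modules and run further guests.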
Administrators interact with KVM through user-space tools including QEMU, libvirt, virt-manager, and orchestration platforms such as OpenStack and oVirt. Provisioning workflows often use configuration management systems like Ansible, Puppet, and Chef to automate VM lifecycle operations. Cloud providers integrate KVM into control planes provided by OpenStack or bespoke systems, as seen in Amazon EC2 and Google Compute Engine offerings. Management also leverages monitoring and observability stacks such as Prometheus, Grafana, and the ELK Stack, and logging agents maintained by projects such as Fluentd. Backup and disaster recovery workflows interoperate with storage projects like Ceph and GlusterFS.
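A typical libvirt workflow defines a domain in XML and manages its lifecycle with virsh or virt-manager. A minimal sketch of such a definition, with illustrative names and paths:

```xml
<domain type='kvm'>
  <name>demo</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
  </os>
  <devices>
    <!-- VirtIO disk and network devices for paravirtualized I/O -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```

Such a definition is registered with `virsh define` and started with `virsh start demo`; libvirt translates it into the QEMU command line and the /dev/kvm operations described earlier.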
KVM performance depends on CPU virtualization features from Intel and AMD, memory subsystem characteristics such as NUMA topology and huge pages, and I/O path optimizations including VirtIO paravirtualized drivers and SR-IOV support from NIC vendors like Intel Corporation and Broadcom. Benchmarking typically uses suites and workloads from projects such as SPEC, community tools such as the Phoronix Test Suite, and cloud-native benchmarks from the Cloud Native Computing Foundation. Metrics capture CPU overhead, network throughput with SR-IOV and VirtIO, and storage latency with backends like Ceph, LVM, and ZFS. Optimizations often tune kernel parameters informed by the work of Linux kernel contributors and vendor whitepapers from Intel and Red Hat.
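The huge-page tuning described above is applied on the host and then requested per VM. A minimal sketch (the page count is illustrative and depends on guest memory sizing):

```
# /etc/sysctl.d/80-hugepages.conf -- reserve 2 MiB huge pages on the host
vm.nr_hugepages = 1024
```

A libvirt domain can then request huge-page backing for its guest memory with a `<memoryBacking><hugepages/></memoryBacking>` element, reducing TLB pressure for memory-intensive guests.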
KVM's security posture relies on kernel integrity and hardware features like Intel VT-d and AMD-Vi for direct device isolation, and on frameworks such as VFIO for safe device assignment. Hardening is pursued through kernel security modules such as SELinux (used by libvirt's sVirt to confine guests) and through mitigation efforts coordinated by the upstream kernel community and vendors including Red Hat and Canonical. Attack surface reduction uses minimal device models, seccomp sandboxing of user-space emulators, and secure boot chains involving UEFI and the Trusted Platform Module. Vulnerability disclosure and remediation follow the Common Vulnerabilities and Exposures (CVE) process, with security advisories published by Linux distributors and cloud providers.
KVM is widely adopted across enterprise and cloud ecosystems, supported by vendors such as Red Hat, Canonical, SUSE, IBM, and cloud providers like Amazon Web Services and Google Cloud Platform. Integration with orchestration and management projects like OpenStack, oVirt, and KubeVirt enables mixed workloads alongside container platforms like Kubernetes. The ecosystem includes tooling from QEMU, libvirt, virt-manager, storage solutions such as Ceph and GlusterFS, and networking integrations with Open vSwitch and SDN projects like OpenDaylight. Academic and industry research at institutions including MIT and Stanford University continues to explore nested virtualization, performance isolation, and heterogeneous accelerators in KVM-based environments.
Category:Virtualization