| PCI Express | |
|---|---|
| *Peripheral Component Interconnect Express card* (photo: Dmitry Nosachev, CC BY-SA 4.0) | |
| Name | PCI Express |
| Abbreviation | PCIe |
| Developer | PCI-SIG |
| Introduced | 2003 |
| Predecessor | PCI, PCI-X, AGP |
| Type | Serial expansion bus |
PCI Express (PCIe) is a high-speed serial computer expansion bus standard developed to replace the parallel PCI and AGP buses in personal computers and servers. Initially proposed by Intel Corporation, it is specified and maintained by the PCI-SIG industry consortium. PCI Express provides scalable lane widths, a point-to-point topology, and features for modern Microsoft Windows and Linux operating systems, as well as server platforms from Dell Technologies, Hewlett Packard Enterprise, and Lenovo.
PCI Express originated from efforts by Intel Corporation after the limitations of PCI and AGP became apparent in the early 2000s. The specification was developed and promoted by the PCI-SIG working group, whose members include AMD, NVIDIA, Broadcom Inc., Marvell Technology Group, and IBM. Major adopters included desktop vendors such as ASUS and Gigabyte Technology, and workstation makers such as HP and Dell Technologies. The ecosystem spans chipset manufacturers including Intel Corporation and AMD, PCIe switch vendors such as Broadcom Inc. and Microchip Technology, and operating system integrators such as Red Hat and Canonical.
PCI Express employs a layered architecture comprising Transaction, Data Link, and Physical layers; the stack interfaces with chipset logic from Intel Corporation and AMD. Its packet-based protocol replaces the shared-bus arbitration used in PCI with point-to-point links, as adopted by platforms such as Intel Xeon servers and AMD EPYC systems. The Transaction Layer generates Transaction Layer Packets (TLPs) and is compatible with the I/O virtualization features used by VMware, Inc., Microsoft Hyper-V, and KVM. The Data Link Layer adds sequence numbers and a link CRC (LCRC), mechanisms similar to those in Ethernet implementations from Cisco Systems and Juniper Networks. The Physical Layer defines electrical and logical signaling akin to other serial interconnects such as Serial ATA and USB. Error reporting and management integrate with system firmware from AMI and Insyde Software and are exposed to system management frameworks like OpenBMC.
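The Data Link Layer's framing (sequence number in front, LCRC behind) can be sketched as follows. This is an illustrative toy, not a bit-exact implementation: PCIe's LCRC uses the same generator polynomial as Ethernet's CRC-32, so the sketch borrows Python's `zlib.crc32`, but real hardware applies spec-defined bit orderings and seeding, and the `frame_tlp`/`check_tlp` names are invented for this example.

```python
import struct
import zlib

def frame_tlp(seq_num: int, tlp_payload: bytes) -> bytes:
    """Toy Data Link Layer framing: prepend a 12-bit sequence number
    (carried here in 2 bytes) and append a 32-bit LCRC over the frame."""
    header = struct.pack(">H", seq_num & 0x0FFF)  # 12-bit sequence number
    body = header + tlp_payload
    lcrc = struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)
    return body + lcrc

def check_tlp(frame: bytes) -> bool:
    """Receiver side: recompute the LCRC and compare with the trailer.
    A mismatch would trigger a NAK and retransmission on a real link."""
    body, lcrc = frame[:-4], frame[-4:]
    return struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF) == lcrc

frame = frame_tlp(seq_num=7, tlp_payload=b"\x00\x00\x00\x01")
assert check_tlp(frame)                       # intact frame passes
bad = frame[:-1] + bytes([frame[-1] ^ 0xFF])  # flip bits in the LCRC
assert not check_tlp(bad)                     # corruption is detected
```

On a real link, a failed check causes the receiver to NAK the sequence number, and the transmitter replays the TLP from its retry buffer.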
PCI Express has evolved through multiple generations with increasing per-lane data rates, implemented by semiconductor manufacturers such as Intel Corporation, AMD, NVIDIA, and Broadcom Inc. Early desktop and server platforms implemented Gen1 and Gen2 rates, adopted by vendors such as ASUS and Gigabyte Technology. Subsequent generations (Gen3, Gen4, Gen5, and Gen6) appear in products across the Intel Corporation and AMD server ecosystems, including Supermicro motherboards and cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Each generation roughly doubled the raw bit rate per lane, scaling the x1, x4, x8, and x16 link configurations common in graphics cards from NVIDIA and AMD Radeon. Enterprises deploying high-performance storage use NVMe drives from Samsung Electronics and Western Digital Corporation, which exploit multi-lane throughput. Switches and retimers from Texas Instruments, Intel Corporation, and Broadcom Inc. help maintain signal integrity at the higher generations.
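The effect of per-lane rate and line-code overhead on usable bandwidth can be shown with a short calculation. The rates below are the published per-lane signaling rates for Gen1 through Gen5 (2.5, 5, 8, 16, and 32 GT/s); Gen1/Gen2 use 8b/10b encoding and Gen3 onward use 128b/130b. The `link_bandwidth_gbps` helper is a name invented for this sketch, and the result ignores packet and protocol overhead.

```python
# Per-lane raw signaling rate in GT/s, and line-code efficiency, per generation.
GEN_RATE_GT = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}

def link_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable link bandwidth in GB/s (decimal), after line-code overhead
    but before TLP/DLLP protocol overhead."""
    return GEN_RATE_GT[gen] * ENCODING[gen] * lanes / 8

# An x16 Gen3 link: 8 GT/s * 128/130 * 16 lanes / 8 bits ≈ 15.75 GB/s
print(round(link_bandwidth_gbps(3, 16), 2))
# An x4 Gen4 NVMe link: 16 GT/s * 128/130 * 4 / 8 ≈ 7.88 GB/s
print(round(link_bandwidth_gbps(4, 4), 2))
```

The doubling per generation is visible directly: an x16 Gen4 link delivers roughly twice the x16 Gen3 figure.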
The PCI Express Physical Layer defines electrical, logical, and mechanical interfaces used by motherboards from ASUS, MSI, and Gigabyte Technology. Standard edge connectors (x1, x4, x8, x16) match slot geometries on consumer and server mainboards from Dell Technologies and HP. Smaller form factors, adopted by laptop OEMs such as Lenovo and Acer, include M.2 and U.2 implementations that map PCIe lanes to storage form factors used by NVMe SSDs from Samsung Electronics and Intel Corporation. Signal conditioning and retimers from Texas Instruments and Analog Devices, Inc. are integrated into chassis and add-in cards produced by ASRock. Optical and cable-based extensions from companies like Amphenol Corporation and Molex enable remote I/O in data centers run by Equinix and Digital Realty.
PCI Express defines power delivery limits for slot-powered devices on motherboards designed by ASUS, Gigabyte Technology, and MSI. Graphics cards from NVIDIA and AMD often require auxiliary power connectors defined by the PCI-SIG and implemented by power-supply manufacturers such as Corsair. Low-profile cards and embedded modules appear in systems by Intel Corporation and AMD partners including Zotac and ASRock Industrial. Form factors such as full-height, half-height, M.2, and E1.S are used in servers from Supermicro and storage arrays from NetApp and Dell EMC. Power management features integrate with platform firmware standards from the UEFI Forum and server management from Red Hat and Hewlett Packard Enterprise.
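The slot and auxiliary limits combine into a simple power budget. The nominal figures below (75 W from a full-size x16 slot, 75 W per 6-pin and 150 W per 8-pin auxiliary connector) come from the PCIe electromechanical conventions; the `card_power_budget` helper is a name invented for this sketch and ignores connector derating.

```python
# Nominal PCIe add-in-card power limits, in watts.
SLOT_X16_W = 75    # full-size x16 slot delivery
AUX_6PIN_W = 75    # per 6-pin auxiliary connector
AUX_8PIN_W = 150   # per 8-pin auxiliary connector

def card_power_budget(aux_6pin: int = 0, aux_8pin: int = 0) -> int:
    """Upper bound on board power for an x16 card, given its aux connectors."""
    return SLOT_X16_W + aux_6pin * AUX_6PIN_W + aux_8pin * AUX_8PIN_W

print(card_power_budget())            # slot only: 75 W
print(card_power_budget(aux_8pin=2))  # 75 + 2 * 150 = 375 W
```

This is why high-end graphics cards carry one or more auxiliary connectors: the slot alone caps board power at 75 W.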
PCI Express is used broadly in consumer graphics cards from NVIDIA and AMD Radeon, storage devices such as NVMe SSDs from Samsung Electronics and Western Digital Corporation, high-speed networking from Intel Corporation and Broadcom Inc., and accelerator cards from Xilinx (now AMD), NVIDIA, and Intel Corporation. Cloud providers including Amazon Web Services, Google Cloud Platform, and Microsoft Azure deploy PCIe-based accelerators in hyperscale racks. Telecommunication equipment vendors like Huawei and Ericsson integrate PCIe in baseband processing systems, while industrial vendors such as Siemens and Schneider Electric use PCIe in embedded controllers. Scientific computing centers at institutions like CERN and national labs running clusters from Cray (now Hewlett Packard Enterprise) rely on PCIe for GPU and FPGA attachments.
PCI Express maintains backward and forward compatibility at the protocol level across generations: a newer device in an older slot, or an older device in a newer slot, negotiates down to the highest data rate and link width both ends support. This enables devices from NVIDIA, AMD, and Intel Corporation to interoperate on motherboards by ASUS, Gigabyte Technology, and MSI. The PCI-SIG publishes compliance programs used by vendors such as Broadcom Inc. and Marvell Technology Group to certify interoperability. Operating system support in Microsoft Windows, Linux, and virtualization platforms like VMware, Inc. ensures functional integration with device drivers from NVIDIA, Intel Corporation, and AMD. Data center interoperability testing is performed by consortiums including the Open Compute Project and commercial integrators such as HPE and Dell Technologies.
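The negotiated rate and width are visible to the operating system; on Linux, `lspci -vv` reports them in a device's `LnkSta` line. A small parser for that line can be sketched as follows (the `SAMPLE` string and `parse_link_status` helper are invented for this example; on a live system the text would come from running `lspci -vv`).

```python
import re

# Sample "lspci -vv" link-status line; real output comes from e.g.
# subprocess.run(["lspci", "-vv"], capture_output=True, text=True).
SAMPLE = "LnkSta: Speed 8GT/s (ok), Width x16 (ok)"

def parse_link_status(line: str):
    """Extract (speed in GT/s, lane width) from an lspci LnkSta line,
    or None if the line does not match."""
    m = re.search(r"Speed\s+([\d.]+)GT/s.*Width\s+x(\d+)", line)
    if not m:
        return None
    return float(m.group(1)), int(m.group(2))

print(parse_link_status(SAMPLE))  # (8.0, 16)
```

Comparing `LnkSta` against the device's advertised `LnkCap` is a common way to spot a card that has trained below its rated generation or width.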
Category:Computer buses