| HBM (High Bandwidth Memory) | |
|---|---|
| Name | High Bandwidth Memory |
| Type | Synchronous dynamic random-access memory |
| Developer | SK Hynix, Samsung, Micron; standardized by JEDEC |
| Introduced | 2013 (JEDEC JESD235) |
| Form factor | 2.5D/3D stacked die |
| Interface | Wide parallel bus, TSV |
| Capacity | 1 GB (HBM1) to 24 GB (HBM3) per stack |
| Bandwidth | 128 GB/s (HBM1) to ~819 GB/s (HBM3) per stack |
| Voltage | 1.2 V (varies) |
HBM (High Bandwidth Memory) is a type of high-performance memory designed for bandwidth-intensive devices such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), supercomputer accelerators, and artificial-intelligence systems. It was developed through standards work and industry collaboration involving JEDEC, SK Hynix, Samsung Electronics, and Micron Technology, and it uses three-dimensional stacking and wide interfaces to achieve substantially higher throughput per watt than conventional double data rate (DDR) memories. HBM's design emphasizes stacked memory dies, through-silicon vias (TSVs), and interposer-based connections to processors, enabling compact layouts in devices such as NVIDIA GPU accelerators, AMD Radeon products, and Intel FPGA platforms.
HBM integrates vertically stacked memory dies connected by through-silicon vias and co-packaged with a logic die or processor die on an interposer, a strategy seen in products from NVIDIA, AMD, and Intel Corporation, and in supercomputing deployments at Lawrence Livermore National Laboratory and Oak Ridge National Laboratory. The technology contrasts with the DDR4, GDDR5, and GDDR6 families by trading raw per-pin frequency for a much wider parallel data interface. Industry standardization efforts led by the JEDEC Solid State Technology Association guided interoperability among vendors such as SK Hynix and Samsung Electronics, while market adoption has been shaped by product launches from NVIDIA Corporation and collaborations involving Advanced Micro Devices.
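The trade-off can be made concrete with a back-of-the-envelope calculation: peak bandwidth is bus width times per-pin data rate. The sketch below uses nominal, generation-typical figures (a 1024-bit HBM2 stack at about 2 Gb/s per pin versus a 32-bit GDDR6 chip at about 16 Gb/s per pin); shipping parts vary by speed grade.

```python
# A minimal sketch of the width-vs-frequency trade-off described above.
# The per-pin rates and bus widths are nominal, generation-typical figures.

def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2_stack = peak_bandwidth_gbs(bus_width_bits=1024, pin_rate_gbps=2.0)
gddr6_chip = peak_bandwidth_gbs(bus_width_bits=32, pin_rate_gbps=16.0)

print(f"HBM2 stack: {hbm2_stack:.0f} GB/s")  # 256 GB/s from a slow, wide bus
print(f"GDDR6 chip: {gddr6_chip:.0f} GB/s")  # 64 GB/s from a fast, narrow bus
```

The wide, slow bus reaches the same aggregate throughput at lower clock rates, which is where HBM's power-per-bit advantage comes from.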
HBM's architecture relies on stacked memory dies connected with through-silicon vias to form a multi-die cube, and it typically places those stacks on a silicon interposer alongside a logic die, using packaging technologies such as TSMC's CoWoS, Intel's EMIB, and offerings from Amkor Technology. The interface is a wide parallel bus, 1024 bits per stack in HBM1 and HBM2, divided into independent channels each served by its own controller, enabling hundreds of gigabytes per second per stack; SK Hynix and Micron Technology pursue similar system goals, drawing on interconnect expertise from companies such as Broadcom and Xilinx. Error management includes ECC schemes and signal-integrity practices informed by research from MIT, Stanford University, and UC Berkeley, while thermal management reflects contributions from IBM Research and cooling-system vendors such as Asetek.
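The ECC idea mentioned above can be illustrated with the classic Hamming(7,4) single-error-correcting code. This is a teaching sketch only, not the coding scheme of any HBM generation (real devices use wider SECDED-style codes over the data bus):

```python
# Teaching sketch of single-error correction, illustrating the ECC concept.
# Hamming(7,4): 4 data bits protected by 3 parity bits; NOT HBM's actual code.

def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits as a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Locate and flip a single flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 if clean
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                          # inject a single-bit error
assert hamming74_correct(code) == word
```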
HBM offers substantially higher bandwidth per watt than DDR4 and GDDR5, a property that benefited compute platforms such as Google TPU clusters and Facebook AI infrastructure and contributed to its selection for supercomputer nodes at Argonne National Laboratory. The technology reduces trace lengths and pin count per unit of bandwidth, which affects board-level power delivery designed by motherboard manufacturers such as ASUS, Gigabyte Technology, and MSI. Performance is evaluated in contexts including memory-bound workloads common in DeepMind research, OpenAI model training, and scientific simulations at CERN, while power profiles inform designs from server OEMs such as Dell Technologies and Hewlett Packard Enterprise.
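Whether a workload benefits from HBM's bandwidth can be estimated with a simple roofline-style check: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved) falls below the machine balance (peak FLOP/s divided by peak bandwidth). The hardware numbers in the sketch below are assumptions for illustration, not any specific product's specifications:

```python
# Roofline-style check for memory-bound behavior. The hardware numbers
# here are illustrative assumptions, not any specific product's specs.

def attainable_gflops(ai_flops_per_byte: float,
                      peak_gflops: float,
                      peak_bw_gbs: float) -> float:
    """Roofline model: performance is capped by compute or memory traffic."""
    return min(peak_gflops, ai_flops_per_byte * peak_bw_gbs)

PEAK_GFLOPS = 20_000.0   # assumed accelerator: 20 TFLOP/s
PEAK_BW_GBS = 1_600.0    # assumed: two HBM stacks at ~800 GB/s each

machine_balance = PEAK_GFLOPS / PEAK_BW_GBS   # 12.5 FLOPs per byte here

# Streaming kernels like vector addition move ~12 bytes per FLOP, so their
# arithmetic intensity sits far below the balance point: memory-bound.
for name, ai in [("vector add", 1 / 12), ("large matmul", 60.0)]:
    perf = attainable_gflops(ai, PEAK_GFLOPS, PEAK_BW_GBS)
    bound = "memory-bound" if ai < machine_balance else "compute-bound"
    print(f"{name}: AI={ai:.3f} FLOP/B -> {perf:,.0f} GFLOP/s ({bound})")
```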
HBM has evolved through multiple generations (HBM, HBM2, HBM2E, HBM3, HBM3E) defined by industry roadmaps and standards work involving JEDEC, SK Hynix, Samsung Electronics, and Micron Technology, with successive versions improving bandwidth, stack height, and power efficiency; these developments paralleled transitions in processor packaging promoted by TSMC and GlobalFoundries. Major generational milestones influenced product launches from NVIDIA (notably in its accelerator lineups), integrations by AMD in graphics and compute GPUs, and research reported at conferences such as ISSCC and Hot Chips. Standardization ensured compatibility, with interoperability concerns addressed by ecosystem players including Cadence Design Systems and Synopsys.
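The generational gains follow directly from the per-pin data rate, since the 1024-bit stack interface stayed constant through HBM3. The per-pin rates below are commonly cited nominal figures; individual parts vary by speed grade:

```python
# Peak per-stack bandwidth across generations, from the nominal per-pin
# data rate and the constant 1024-bit stack interface (through HBM3).

BUS_WIDTH_BITS = 1024

NOMINAL_PIN_RATE_GBPS = {
    "HBM1":  1.0,   # -> 128 GB/s per stack
    "HBM2":  2.0,   # -> 256 GB/s
    "HBM2E": 3.6,   # -> ~460 GB/s
    "HBM3":  6.4,   # -> ~819 GB/s
}

for gen, rate in NOMINAL_PIN_RATE_GBPS.items():
    gbs = BUS_WIDTH_BITS * rate / 8
    print(f"{gen:6s} {rate:4.1f} Gb/s/pin -> {gbs:6.1f} GB/s per stack")
```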
HBM is used in accelerator cards for machine-learning and deep-learning workloads from NVIDIA and Google, in high-end FPGA boards from Xilinx and Intel, and in some high-end consumer GPUs, notably AMD's Radeon R9 Fury and Vega series. Scientific computing centers such as Oak Ridge National Laboratory, Lawrence Berkeley National Laboratory, and Los Alamos National Laboratory deploy HBM-equipped nodes for climate modeling, computational chemistry, and physics simulations presented at venues such as the Supercomputing Conference. Content-creation workflows from companies like Adobe Systems and real-time rendering in engines such as Unreal Engine also leverage HBM-equipped GPUs for texture streaming and large-dataset handling.
HBM manufacturing combines memory die fabrication by manufacturers such as Samsung Electronics and SK Hynix with advanced packaging by companies like ASE Technology Holding and Amkor Technology, and interposer production involving TSMC and 3D-integration research groups at CEA-Leti. TSV formation, micro-bump bonding, and redistribution-layer processes are coordinated with equipment suppliers such as Applied Materials and Lam Research. Integration onto substrates for compute modules is performed by OEMs including Foxconn and Quanta Computer, with quality and yield strategies influenced by semiconductor testing practices from Teradyne.
HBM's adoption has reshaped supply chains involving the memory vendors SK Hynix, Samsung Electronics, and Micron Technology, and influenced strategic partnerships between fabless firms like NVIDIA and foundries such as TSMC. The technology's premium cost and packaging complexity affected product roadmaps at OEMs like Dell Technologies and HP Inc., while accelerating research agendas at universities including MIT and Stanford University and prompting policy discussions in forums such as World Economic Forum panels. HBM's influence continues in next-generation compute, standards evolution at JEDEC, and competitive strategies among semiconductor firms including Intel Corporation and Advanced Micro Devices.
Category:Computer memory