| x87 | |
|---|---|
| Name | x87 |
| Designer | Intel |
| Introduced | 1980 (Intel 8087) |
| Architecture | x86 floating-point coprocessor stack |
| First implementation | Intel 8087 |
| Successor | SSE |
| Type | floating-point unit |
The x87 floating-point extension is a family of floating-point coprocessor technologies integrated into the x86 ecosystem, originating with Intel's 8087 and evolving through the 80287, 80387, and into integrated units in Pentium-class and later processors. It provided hardware support for floating-point arithmetic, transcendental functions, and a register-stack model used by compilers, assemblers, and operating systems across platforms such as the IBM PC and x86 UNIX systems. Implementations influenced numerical libraries, scientific computing, and graphics pipelines in organizations and projects including Bell Labs, CERN, NASA, and the GNU Project.
The origin of the technology traces to Intel's development cycle alongside the Intel 8086 and Intel 8088 microprocessors, with the Intel 8087 released in 1980 to accelerate numerically intensive workloads. Subsequent products, the Intel 80287 and Intel 80387, targeted the IBM PC/AT and workstation markets. The 8087's arithmetic design strongly influenced the IEEE 754-1985 standard; during the 1990s, integration into the Intel Pentium processor and rivalry with AMD shaped implementations in systems from Compaq, Sun Microsystems (later acquired by Oracle), and research centers such as Los Alamos National Laboratory. Compiler vendors (GCC, Microsoft Visual C++, and Borland) adapted calling conventions for language standards including ANSI C, ISO C90, and Fortran 77, affecting numerical reproducibility for projects such as LAPACK, BLAS, and MATLAB. The x87 model coexisted with, and later competed against, SIMD extensions such as MMX and SSE introduced by Intel.
The architecture centers on a register stack of eight 80-bit floating-point registers, addressed relative to a top-of-stack pointer as ST(0) through ST(7), implemented in CPUs and coprocessors from Intel, AMD, and Cyrix. Control and status words, a tag word, and last-instruction and last-operand pointers form part of the state saved and restored by operating systems such as Windows NT, Linux, and the BSD kernels during context switches and signal handling in environments like POSIX and SVR4. Microarchitectural implementations in families such as P5, P6, and NetBurst varied, influencing pipeline behavior and cache-coherence interactions in multiprocessor servers from vendors such as Sun Microsystems and HP. The unit addresses memory operands through the processor's segmentation and paging mechanisms in IA-32 and later x86-64 modes, state that hypervisors such as Xen and VMware ESX must preserve across mode transitions.
The instruction repertoire comprises arithmetic, comparison, load/store, control, and transcendental instructions documented since the early coprocessor manuals and supported by assemblers such as MASM, NASM, and GAS. Instructions include floating-point add, subtract, multiply, divide, square root, and partial remainder, as well as the FSAVE/FRSTOR and FNSTENV/FLDENV state-management instructions used by operating systems and runtime libraries from Microsoft, Sun, and Red Hat. The instruction semantics were influential in IEEE standardization efforts and appear in compiler backends for GCC, LLVM, and the Intel C Compiler. Calling conventions defined by the System V AMD64 ABI and the Microsoft x64 calling convention affected how high-level languages such as Fortran, C++, and Pascal implemented floating-point argument passing, return values, and exception handling.
The unit's native format is 80-bit double extended precision: a sign bit, a 15-bit exponent, and a 64-bit significand with an explicit integer bit, giving a wide dynamic range used in numerical analysis and symbolic computation at institutions like MIT, Stanford University, and Princeton University. It also loads and stores 32-bit single-precision and 64-bit double-precision operands compatible with IEEE 754 representations employed by scientific software packages such as SciPy, GNU Octave, and R. The control word selects the rounding mode and precision control, affecting reproducibility for projects including NumPy, IDL, and bespoke simulation codes used at CERN and Lawrence Livermore National Laboratory.
Programmers accessed the instruction set via inline assembly in toolchains such as GCC and Microsoft Visual Studio, through compiler intrinsics, or via high-level math libraries like libm and vendor-optimized libraries such as Intel MKL and AMD ACML. Language runtimes for Java Virtual Machine implementations, the .NET CLR, and scripting environments such as Python and Perl interacted with the floating-point environment when executing numerical kernels in applications from companies like MathWorks and Wolfram Research. Operating systems from Microsoft, Apple, and various Linux distributions include wrappers and ABI contracts to manage the coprocessor state across signals, threads, and context switches.
Performance characteristics depended on microarchitecture, pipeline depth, and on-chip integration in processor families from Intel and AMD; later SIMD extensions such as SSE2 and AVX offered higher throughput for vectorized workloads in software from Adobe Systems and game engines from id Software and Epic Games. Compatibility layers and emulators like DOSEMU, Bochs, and QEMU reproduce the instruction set for legacy MS-DOS and Windows 3.1-era applications, while modern compilers usually lower floating-point operations to SSE or AVX for performance, subtly changing numerical behavior relevant to reproducibility studies at institutions like NIST and LANL. The evolution from discrete coprocessor chips to integrated units shaped performance trade-offs for scientific computing, financial modeling, and multimedia processing across vendors and standards bodies including ISO and IEEE.
Category:Floating-point processors