| libm | |
|---|---|
| Name | libm |
| Developer | Various vendors and contributors |
| Released | 1970s |
| Latest release | Ongoing |
| Programming language | C, assembly |
| Operating system | Unix-like, Windows, BSD, Linux |
| Genre | Software library |
libm is the standard mathematical library traditionally associated with the C programming language, providing floating-point mathematical functions for applications and runtime environments. It underpins numerical computation in compilers, operating systems, scientific applications, and embedded systems, and its development links compiler toolchains, standards bodies, and processor vendors. Implementations have been produced by academic groups, corporations, and open-source projects and are integrated into ecosystems maintained by foundations and vendors.
libm supplies elementary functions such as trigonometric, exponential, logarithmic, power, and rounding operations used by compilers and runtime libraries on systems such as Unix, Linux, Windows NT, FreeBSD, and NetBSD. It interfaces with language standards such as ISO/IEC 9899 (the C standard) and with toolchains such as the GNU Compiler Collection, Clang, Microsoft Visual C++, and the Intel C++ Compiler. Implementations often target architecture-specific features from vendors such as Intel, AMD, Arm, and IBM while conforming to behavior specified by standards such as IEEE 754-2008, by committees such as ISO/IEC JTC1/SC22/WG14, and by conformance suites used by projects including the GNU Project, the Free Software Foundation, and OpenBSD.
Early libm implementations emerged in the 1970s alongside research at institutions such as Bell Labs and the University of California, Berkeley, and in systems such as Version 7 Unix. Vendors including Sun Microsystems, Digital Equipment Corporation, and Hewlett-Packard contributed platform-tuned libraries during the 1980s and 1990s. Floating-point semantics standardized in IEEE 754-1985 and later IEEE 754-2008 drove evolution, while coordination between toolchain projects such as the GNU Project and vendors such as Intel led to optimized kernels. Academic contributions from groups at MIT, Stanford University, and the University of Cambridge influenced algorithm selection, and resources such as Numerical Recipes and researchers such as William Kahan informed correct rounding and error analysis.
The libm API exposes functions declared in headers standardized by ISO/IEC 9899, chiefly math.h, used by POSIX-compliant environments and by runtime libraries such as glibc and musl. Core functions include the sine, cosine, tangent, exponential, logarithm, square root, and power families, used in applications from MATLAB-based workflows to NumPy-backed scientific stacks. Language front ends in projects such as GCC, Clang, Microsoft Visual C++, and the Intel C++ Compiler rely on libm symbols for code generation and link-time optimization. Extensions and intrinsics provided by vendors such as Arm and Intel map math operations to vector instruction sets such as AVX, NEON, and SVE.
Implementations include vendor and open-source variants: glibc's math library, musl's math routines, the BSD libc math code shipped in FreeBSD and OpenBSD, the Microsoft Visual C++ runtime math functions, and optimized libraries from Intel and AMD. Specialized math libraries include CRlibm, FDLIBM, SLEEF, the Boost C++ Libraries math components, and commercial math packages from IBM and NVIDIA. Vectorized and high-performance variants appear in the Intel Math Kernel Library, AMD LibM, and in GPU-targeted projects such as the CUDA math libraries and ROCm. Implementations also differ by language binding in projects such as Python, Julia, and R, which wrap native libm functions.
libm implementations balance throughput, latency, and accuracy, with trade-offs managed by compiler flags in toolchains such as GCC, Clang, and ICC. Math function kernels use polynomial approximations (e.g., minimax), argument reduction, and table-lookup techniques developed in research at the University of California, Berkeley, Stanford University, and INRIA. Performance tuning leverages microarchitecture details of Intel designs (e.g., Sandy Bridge, Haswell), AMD Zen, and ARM Cortex cores; vectorized implementations exploit AVX2, AVX-512, and NEON. Accuracy targets reference the correctly rounded operations recommended by IEEE 754-2008 and test suites developed by organizations such as NIST, with some projects favoring speed over strict rounding (fast-math modes), as seen in GCC's -ffast-math and in vendor fast libraries.
Conformance and testing rely on suites and frameworks from standards and research bodies, including NIST test vectors and the CRlibm testbeds, together with tools such as Valgrind and AddressSanitizer for memory safety. Reproducibility and correctness checks use test cases inspired by the work of William Kahan and test harnesses in the glibc and musl continuous-integration systems. Certification and verification efforts draw on formal methods from projects at Microsoft Research, INRIA, and the IMDEA Software Institute and leverage numerical verification tools used in academic studies at the University of Illinois and ETH Zurich.
Operating systems and toolchains integrate libm into core distributions: Linux distributions bundle the glibc or musl math libraries; BSD distributions ship the FreeBSD and OpenBSD implementations; proprietary systems rely on Microsoft Windows runtime libraries or on vendor stacks from Oracle Corporation and Sun Microsystems derivatives. Compiler toolchains such as the GNU Compiler Collection, Clang, and the Intel C++ Compiler, together with build systems such as CMake and Autotools, link libm during application builds. Performance-sensitive ecosystems, including numerical libraries such as BLAS, LAPACK, and OpenBLAS and scientific environments such as SciPy, depend on consistent libm behavior across platforms and coordinate with vendors such as NVIDIA and Intel and organizations such as the Linux Foundation for reproducible numerical behavior.
Category:Mathematical libraries