| LAPACK | |
|---|---|
| Name | LAPACK |
| Author | Ed Anderson, Zhaojun Bai, James Demmel, Jack Dongarra, et al. |
| Released | 1992 |
| Programming language | Fortran |
| Platform | Supercomputers; Workstations |
| Genre | Numerical linear algebra library |
| License | BSD-like |
LAPACK
LAPACK is a software library for numerical linear algebra, providing routines for solving systems of linear equations, linear least-squares problems, eigenvalue problems, and singular value decompositions. It succeeds the earlier LINPACK and EISPACK libraries and was developed by teams from institutions including Argonne National Laboratory, the University of Tennessee, Oak Ridge National Laboratory, and Lawrence Berkeley National Laboratory. The project both influenced and was influenced by hardware and compiler efforts at vendors such as IBM, Cray Research, and Intel Corporation.
LAPACK originated from collaborations among researchers including Jack Dongarra, drawing on the prior LINPACK work of Dongarra, Jim Bunch, Cleve Moler, and G. W. Stewart, and on algorithms from EISPACK and the BLAS effort at Argonne National Laboratory and Oak Ridge National Laboratory. Funding and coordination involved agencies such as the National Science Foundation and the Department of Energy, with dissemination through the Netlib repository. Early releases targeted vector and shared-memory machines from vendors such as Cray Research and IBM, and subsequent evolution paralleled processor developments at Intel Corporation and AMD and deployments at centers such as Los Alamos National Laboratory and Sandia National Laboratories.
The library's architecture emphasizes block-oriented algorithms layered on a standardized kernel interface, the BLAS specification, informed by work at Argonne National Laboratory and distributed through Netlib. Fortran 77 was adopted as the initial implementation language, in coordination with compiler and optimizer teams at vendors such as IBM and Intel Corporation; later releases also interoperate with the GNU toolchain and OpenMP. Portability and scalability come from structuring routines to exploit memory hierarchies on hardware from Cray Research, Fujitsu, and other vendors collaborating with supercomputing centers such as Oak Ridge National Laboratory and Lawrence Berkeley National Laboratory.
The library implements algorithmic families for direct methods, including LU, Cholesky, and QR factorizations, developed in parallel with theoretical advances by numerical analysts at institutions such as the Courant Institute and Stanford University. Eigenvalue and singular value routines refine techniques based on the QR algorithm and bidiagonal reduction. Routines handle dense and banded matrices, use blocking strategies developed at the University of Tennessee and the University of California, Berkeley, and interoperate with sparse solvers from projects at Sandia National Laboratories and Lawrence Livermore National Laboratory.
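As a minimal sketch of the LU family, the following pure-Python function performs an unblocked LU factorization with partial pivoting, the mathematical operation computed (in blocked, BLAS-accelerated form) by LAPACK's xGETRF routines. This is an assumption-free textbook version for illustration, not LAPACK's implementation.

```python
# Unblocked LU factorization with partial pivoting (Doolittle form),
# sketching the algorithm family behind LAPACK's xGETRF. The real
# routines are blocked and call BLAS kernels for the trailing update.

def lu_factor(A):
    """Return (LU, piv): LU packs the unit-lower factor L (strict lower
    triangle) and upper factor U; piv records the row permutation."""
    n = len(A)
    LU = [row[:] for row in A]
    piv = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k up.
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if p != k:
            LU[k], LU[p] = LU[p], LU[k]
            piv[k], piv[p] = piv[p], piv[k]
        # Eliminate below the pivot; store multipliers in place.
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, piv

LU, piv = lu_factor([[2.0, 1.0], [4.0, 3.0]])
```

Storing L and U in a single array, and the pivots in a separate integer vector, mirrors the packed output convention that LAPACK's factorization routines use.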
Implementations have been optimized for architectures from Intel Corporation, AMD, and IBM, with tuned BLAS variants provided by vendors and third parties such as ATLAS, OpenBLAS, and the Intel Math Kernel Library. Performance engineering, including work with hardware groups at NVIDIA, exploits vector instructions and multicore parallelism through standards such as OpenMP and MPI, demonstrated at facilities including Argonne National Laboratory and Oak Ridge National Laboratory. Benchmarking and performance studies appear in high-performance computing venues organized by ACM, SIAM, IEEE, and the SC conference series.
Bindings and interfaces exist for languages and environments such as C, C++, Python, MATLAB, and R, facilitated by projects including Netlib's C interfaces, SciPy, The MathWorks, and the R project. Interoperability with parallel frameworks and middleware advanced through collaborations with the developers of MPI, OpenMP, and accelerated libraries from NVIDIA and Intel Corporation. Packaging and distribution efforts involve ecosystems such as Debian, Red Hat, and Conda.
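A typical use through a high-level binding looks like the following sketch, which assumes NumPy is installed; `numpy.linalg.solve` dispatches to LAPACK's gesv driver (an LU factorization followed by triangular solves) under the hood.

```python
# Solving a linear system through a Python binding to LAPACK
# (assumes NumPy, whose linalg.solve calls the LAPACK gesv driver).
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)            # LAPACK: LU factor + triangular solves
residual = np.linalg.norm(A @ x - b) # check the solution satisfies A x = b
```

The same computation is reachable from C via the LAPACKE interface or from Fortran by calling DGESV directly; the bindings differ only in calling convention, not in the underlying routine.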
Practitioners in domains including computational physics at CERN, climate modeling at NOAA, structural engineering at NASA, and quantitative finance have relied on the library's routines. Scientific software stacks at centers such as Lawrence Berkeley National Laboratory and Los Alamos National Laboratory, and at universities such as Stanford University and the Massachusetts Institute of Technology, integrate the library for finite element analysis, data assimilation, machine learning pipelines, and high-fidelity simulation, with results presented at venues such as the SIAM Conference on Computational Science and Engineering and the International Conference for High Performance Computing, Networking, Storage and Analysis.
Development has been coordinated through repositories and distribution networks such as Netlib, with contributions from the University of Tennessee, Oak Ridge National Laboratory, and Argonne National Laboratory, and governance shaped by academic and industrial partners including IBM, Intel Corporation, and NVIDIA. The permissive BSD-style license allows broad reuse in open-source ecosystems, including packages maintained by Debian and Conda, while community activity continues in forums and workshops sponsored by SIAM, ACM, and national laboratories.
Category:Numerical linear algebra software