| PGI Compilers | |
|---|---|
| Name | PGI Compilers |
| Developer | The Portland Group; later NVIDIA Corporation |
| Released | 1992 |
| Latest release | Superseded by the NVIDIA HPC SDK (2020) |
| Programming language | C, C++, Fortran |
| Operating system | Linux, Microsoft Windows, macOS (historically) |
| License | Proprietary, commercial |
| Genre | Compiler, development toolchain |
PGI Compilers
PGI Compilers were a suite of commercial compilers for high-performance computing, offering native support for parallelism, vectorization, and accelerator offload. The suite served researchers and engineers in fields relying on supercomputing by providing Fortran, C, and C++ toolchains with OpenMP, MPI, and GPU programming features. The product line evolved through acquisitions and through engagement with the vendors, research centers, and standards bodies that shape scientific software deployment.
PGI Compilers provided native-language compilers for scientific applications with integrated support for OpenMP, MPI, and GPU offload models. The toolchain emphasized optimization for architectures from vendors such as Intel, AMD, and NVIDIA, and interfaced with libraries such as BLAS, LAPACK, and implementations of the MPI standard. Users at national laboratories such as Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Oak Ridge National Laboratory adopted the compilers for codes including LAMMPS, GROMACS, and NAMD. The ecosystem integrated with build systems and debuggers such as CMake, the GNU Compiler Collection, and GDB, and with performance tools such as Intel VTune Amplifier and NVIDIA Nsight.
Development began in the early 1990s at The Portland Group, a company focused on Fortran and C/C++ optimization for vector and parallel machines; its lineage intersected with academic compiler research and with supercomputer vendors such as Cray Research and SGI. The Portland Group was acquired by STMicroelectronics in 2000 and by NVIDIA Corporation in 2013, aligning the product with NVIDIA's accelerator strategy and with open standards work in OpenACC and the OpenMP Architecture Review Board. The evolution mirrored broader industry shifts driven by multicore processors from the Intel Xeon families, manycore designs such as AMD EPYC, and GPU accelerators exemplified by the NVIDIA Tesla series.
Architecturally, the compilers implemented front-ends for Fortran 77, Fortran 90, C99, and C++11, with back-ends generating optimized machine code for x86_64 and GPU targets. Key features included interprocedural optimization, whole-program analysis, auto-vectorization tuned for SSE, AVX, and similar SIMD extensions, and code generation for accelerator frameworks such as CUDA and OpenACC. The toolchain offered profile-guided optimization compatible with sampling tools such as perf on Linux, and instruction-level tuning comparable to the Intel compiler suites and open-source compilers such as LLVM. Debugging and analysis integrated with environments such as Visual Studio and profiling suites such as the TAU Performance System and HPCToolkit.
Supported platforms included major Linux distributions, enterprise-class servers based on Intel and AMD processors, and GPU-accelerated systems using NVIDIA Tesla and NVIDIA A100 hardware. Language support covered legacy and modern scientific languages, including Fortran 77, Fortran 90/95, C99, and C++03/C++11, with extensions for directive-based offload via OpenACC and interoperability with CUDA kernels. Integrations targeted batch schedulers and resource managers used at HPC centers, such as the Slurm Workload Manager, PBS Professional, and LSF.
Performance claims centered on accelerating dense linear algebra, stencil computations, and molecular dynamics kernels through loop transformations, data locality optimization, and explicit accelerator offload. Benchmarks often compared runtimes against GNU Compiler Collection and Intel C++ Compiler on suites like SPEC CPU and domain-specific benchmarks used by centers such as NERSC. Optimizations exploited microarchitectural features in processors from Intel and AMD and memory hierarchies present in systems like Cray XC and HPE Apollo. For GPU workloads, improvements relied on mapping compute kernels to CUDA blocks and warps, leveraging shared memory and warp-synchronous programming idioms promoted in literature from NVIDIA Research.
Distribution followed a commercial license model, with enterprise subscriptions and site licenses common in academic and government procurement, negotiated with institutions including the United States Department of Energy, the National Science Foundation, and major universities such as MIT and the University of California, Berkeley. Packaging included modulefiles for the environment-modules systems used at centers such as NERSC and on XSEDE allocations. Licensing terms governed support, updates, and integration with proprietary stacks from vendors such as Hewlett Packard Enterprise and Dell EMC.
Within high-performance computing communities, the compilers were valued for pragmatic support of legacy Fortran codes, the portability of directive-based offload, and vendor-tuned optimizations. Adoption occurred at DOE laboratories, national supercomputing facilities, and research groups at institutions including Princeton University, the University of Illinois Urbana-Champaign, and Stanford University. Critiques often focused on commercial licensing constraints compared with open-source alternatives such as GCC and the LLVM Project, while advocates highlighted performance wins on targeted kernels and ease of porting to NVIDIA accelerators. The toolchain influenced pedagogy in computational science courses at universities such as the University of Cambridge and ETH Zurich through courseware demonstrating parallel programming models.
Category:Compilers