| HPE Apollo | |
|---|---|
| Name | HPE Apollo |
| Developer | Hewlett Packard Enterprise |
| Family | ProLiant |
| Released | 2015 |
| Type | Server |
| Design | Rack-mount, multi-node |
**HPE Apollo** is a family of high-density rack servers produced by Hewlett Packard Enterprise, designed for large-scale compute, storage, and accelerated workloads. The series targets cloud providers, research institutions, and enterprises that need topology-optimized systems for Hadoop, OpenStack, Kubernetes, and high-performance computing, including sites running TOP500-class workloads. HPE positioned the line alongside products from Dell EMC, IBM, Lenovo, and Cray, addressing deployments across Microsoft Azure, Amazon Web Services, and on-premises installations at national laboratories such as Lawrence Livermore National Laboratory.
Apollo was announced as part of Hewlett Packard Enterprise's expansion to serve hyperscale and enterprise clients, following precedents in density-optimized hardware set by Sun Microsystems and Google. The offering spans modular designs inspired by efforts at NASA centers, as well as accelerator deployments similar to those used in CERN experiments and at supercomputing sites such as the National Energy Research Scientific Computing Center. Hewlett Packard Enterprise marketed Apollo to organizations pursuing Human Genome Project-scale genomics, Large Hadron Collider data analysis, and simulation workloads for agencies including the United States Department of Energy.
The Apollo family includes multiple series with distinct target profiles: multi-node dense servers for web-scale workloads, GPU-optimized enclosures for machine learning, and storage-dense variants for big data. Notable models paralleled designs from the HPE ProLiant DL380 and were often compared with offerings such as the Dell PowerEdge R740xd, IBM Power Systems, and Lenovo ThinkSystem nodes. Configurations supported varied processor choices, including successive Intel Xeon generations and later AMD EPYC lines, as used by cloud providers like Oracle Cloud and research clusters at institutions such as Oak Ridge National Laboratory.
Apollo chassis incorporated modular trays, multi-node motherboards, and custom cooling solutions reminiscent of designs by Cray Research and modern hyperscale vendors. The systems provided dense compute through multi-socket layouts, NVMe bays, and accelerator support for NVIDIA Tesla and later AMD Radeon Instinct cards, used in artificial intelligence research at universities including MIT and Stanford University. Networking options included high-speed fabrics such as InfiniBand, 100 Gigabit Ethernet, and later 400 Gigabit Ethernet, as adopted by hyperscale operators such as Facebook and Twitter. Storage subsystems supported object stores similar to Ceph deployments and tape hierarchies such as those offered by IBM.
Apollo platforms were benchmarked for dense HPC, machine learning training, inference at scale, and analytics stacks running Apache Spark, HDFS, and TensorFlow workloads. Performance tuning often referenced optimizations from LINPACK runs for TOP500 submissions and comparative studies alongside systems at Argonne National Laboratory and NERSC. Use cases encompassed computational chemistry workflows at Lawrence Berkeley National Laboratory, seismic modeling for companies like Schlumberger, and financial risk simulations run by firms such as Goldman Sachs and JPMorgan Chase.
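LINPACK results are conventionally reported as a measured rate (Rmax) against a node's theoretical peak (Rpeak), computed as sockets × cores × clock × double-precision FLOPs per cycle. The sketch below illustrates that arithmetic for a hypothetical two-socket node; all figures (core counts, clock, Rmax) are illustrative assumptions, not published Apollo specifications.

```python
# Hypothetical sketch: theoretical peak vs. measured LINPACK rate
# for a dense two-socket node. All numbers are illustrative assumptions.

def theoretical_peak_gflops(sockets, cores_per_socket, ghz, flops_per_cycle):
    """Peak double-precision GFLOPS: sockets x cores x clock x FLOPs/cycle."""
    return sockets * cores_per_socket * ghz * flops_per_cycle

def linpack_efficiency(rmax_gflops, rpeak_gflops):
    """Fraction of theoretical peak achieved in a LINPACK run (Rmax / Rpeak)."""
    return rmax_gflops / rpeak_gflops

# Assumed node: 2 sockets, 24 cores/socket, 2.4 GHz,
# 32 DP FLOPs/cycle (e.g. two AVX-512 FMA units per core).
rpeak = theoretical_peak_gflops(2, 24, 2.4, 32)
eff = linpack_efficiency(2900.0, rpeak)  # 2.9 TFLOPS is an assumed Rmax
print(f"Rpeak = {rpeak:.1f} GFLOPS, efficiency = {eff:.1%}")
```

Well-tuned dense nodes typically sustain a large fraction of Rpeak on LINPACK, which is why the efficiency ratio, rather than raw GFLOPS, is the figure usually quoted in comparative studies.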
HPE integrated Apollo with management tools derived from HPE OneView and supported orchestration via Red Hat OpenShift, VMware vSphere, and Canonical Ubuntu cloud images used by researchers at institutions like Harvard University. The ecosystem included partnerships with software vendors such as NVIDIA (for the CUDA toolchain), storage integrators like NetApp, and systems integrators such as Atos and Accenture. Firmware and lifecycle management were coordinated with standards from organizations like the Open Compute Project and compliance programs following National Institute of Standards and Technology guidelines.
Market reception placed Apollo as a high-density alternative to traditional two-socket servers from Dell Technologies and to scale-out hardware designs from hyperscalers such as Google. Analysts compared the family to bespoke solutions from HPE's Superdome line and to specialized accelerator clusters from Cerebras Systems. Customers cited total cost of ownership savings versus legacy racks deployed by firms such as McKesson and public sector buyers including United States Department of Defense installations. Over time, competitive dynamics shifted as developments from AMD, Intel, NVIDIA, and hyperscalers like Microsoft drove demand for denser, accelerator-friendly platforms.
Category:Servers Category:Hewlett Packard Enterprise