
Jupiter (network)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Borg (cluster manager), hop 5
Expansion funnel: 83 extracted → 0 after dedup → 0 after NER → 0 enqueued
Jupiter (network)
Name: Jupiter (network)
Type: Network architecture
Developer: Google LLC
First release: 2012 (publicly described in 2015)
Written in: C; C++; P4; eBPF
Operating system: Linux; bare-metal deployments
Platform: Data center fabric; cloud; high-performance computing


Jupiter (network) is a high-performance data center fabric and networking architecture designed for hyperscale data centers and large-scale cloud computing environments. It emphasizes low-latency switching, Clos fabric topologies, programmable pipeline features, and converged traffic engineering for services ranging from web search to distributed storage and machine learning workloads. The design was developed at Google LLC and both influenced and drew on research from Microsoft Corporation, Amazon, and academic projects at Stanford University and the University of California, Berkeley.

Overview

Jupiter originated as a blueprint for building massive leaf-spine fabrics that interconnect thousands of servers using commodity high-port-count switches, leveraging technologies such as Ethernet, Remote Direct Memory Access (RDMA), RDMA over Converged Ethernet (RoCE), and large forwarding tables. The architecture targets operators of data centers such as Google LLC, Facebook, Inc., Microsoft Corporation, and Amazon, and telecommunication providers such as AT&T, Verizon Communications, and NTT. Jupiter integrates control-plane approaches from the Border Gateway Protocol, distributed-systems concepts from Chubby (service), and orchestration techniques seen in Kubernetes and OpenStack to provide multipath routing, fault tolerance, and incremental upgrades.
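
As a rough illustration of how leaf-spine Clos fabrics scale, the Python sketch below computes server capacity and oversubscription for a two-tier design. The port counts and the downlink/uplink split are hypothetical assumptions for illustration, not figures from any published Jupiter deployment.

```python
# Back-of-the-envelope sizing for a two-tier leaf-spine Clos fabric.
# All port counts are hypothetical; real hyperscale fabrics add
# super-spine tiers beyond this two-tier sketch.

def leaf_spine_capacity(leaf_ports: int, spine_ports: int, uplinks_per_leaf: int):
    """Return (spines, max_leaves, max_servers, oversubscription)."""
    downlinks = leaf_ports - uplinks_per_leaf  # server-facing ports per leaf
    n_spines = uplinks_per_leaf                # one uplink from each leaf to each spine
    max_leaves = spine_ports                   # each spine offers one port per leaf
    max_servers = max_leaves * downlinks
    oversub = downlinks / uplinks_per_leaf     # assumes uniform link speeds
    return n_spines, max_leaves, max_servers, oversub

# Example: hypothetical 48-port leaf switches with 16 uplinks, 64-port spines.
spines, leaves, servers, ratio = leaf_spine_capacity(48, 64, 16)
print(f"{spines} spines, {leaves} leaves, {servers} servers, {ratio:.0f}:1 oversubscription")
```

With these assumed values the fabric supports 2,048 servers at 2:1 oversubscription; adding uplinks per leaf lowers the ratio at the cost of server-facing ports.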

Architecture and Protocols

Jupiter typically employs a multi-stage Clos topology with leaf, spine, and super-spine layers, informed by research at Cornell University and the Massachusetts Institute of Technology. The data plane uses high-density switching silicon from vendors such as Broadcom Inc., Intel Corporation, and Marvell Technology Group, along with programmable data-plane technologies such as P4 and eBPF for telemetry and packet steering. Control-plane elements borrow from OpenFlow, Segment Routing, and BGP extensions while integrating software-defined networking controllers such as ONOS (software), OpenDaylight, and proprietary systems from Google LLC and Facebook, Inc. Congestion control mechanisms derive concepts from TIMELY (congestion control), DCTCP, and LEDBAT, and flow management leverages techniques from equal-cost multi-path (ECMP) routing and BGP EVPN.
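
ECMP, mentioned above, is conceptually simple: a switch hashes each flow's five-tuple and uses the result to pick one of several equal-cost uplinks, so every packet of a flow follows the same path and ordering is preserved. The Python sketch below is only a minimal illustration; production switches compute a cheaper hash in ASIC pipelines, and the hash function, field encoding, and port names here are assumptions.

```python
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, src_port: int,
                  dst_port: int, proto: int, uplinks: list):
    """Pick an uplink for a flow by hashing its five-tuple (illustrative only)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()  # real ASICs use cheaper CRC/XOR hashes
    index = int.from_bytes(digest[:4], "big") % len(uplinks)
    return uplinks[index]

uplinks = ["spine-1", "spine-2", "spine-3", "spine-4"]  # hypothetical port names
# Every packet of this flow hashes to the same uplink, preserving ordering.
print(ecmp_next_hop("10.0.0.5", "10.0.9.7", 49152, 443, 6, uplinks))
```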

Deployment and Use Cases

Operators deploy Jupiter-style fabrics in hyperscale data centers to support applications including distributed search, object storage, content delivery networks run by Akamai Technologies, large-scale databases like Spanner (Google) and Cassandra, and machine learning clusters using TensorFlow and PyTorch. Telecommunication providers integrate similar topologies for edge compute and 5G backhaul with vendors such as Nokia, Ericsson, and Huawei. Research installations appear in universities collaborating with industry partners like Intel Corporation and NVIDIA Corporation for GPU-accelerated AI training clusters. Cloud providers such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure adopt Jupiter-derived principles for tenant isolation, virtual networking, and live migration services.

Performance and Scalability

Jupiter architectures scale horizontally to tens of thousands of servers by exploiting Clos properties proven in studies from Stanford University and scalability patterns employed by Google LLC and Facebook, Inc. Performance tuning uses telemetry systems inspired by Prometheus (software), hardware counters from Broadcom Inc. ASICs, and in-band network telemetry similar to INT (In-band Network Telemetry). Latency-sensitive workloads benefit from RDMA and kernel-bypass stacks such as DPDK and netmap, while bulk transfers use congestion control informed by TCP CUBIC and DCTCP. Google has publicly reported Jupiter fabrics delivering more than 1 Pb/s of aggregate bisection bandwidth, and real-world deployments have documented improvements in flow completion time and link utilization comparable to other hyperscale fabrics.
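
DCTCP, cited above as a source of congestion-control ideas, scales the sender's window cut to the fraction of ECN-marked packets instead of halving on any congestion signal. The sketch below implements the update rules from the DCTCP paper (SIGCOMM 2010) as a standalone illustration; the gain g = 1/16 follows that paper, while the window values and marking pattern are arbitrary assumptions.

```python
# DCTCP window update, following the published rules:
#   alpha <- (1 - g) * alpha + g * F   (F = fraction of ECN-marked ACKs per window)
#   cwnd  <- cwnd * (1 - alpha / 2)    (applied when marks were seen)

def dctcp_update(cwnd: float, alpha: float, marked: int, acked: int,
                 g: float = 1 / 16):
    """Return (new_cwnd, new_alpha) after one window of ACKs."""
    frac = marked / acked if acked else 0.0  # F: observed marking fraction
    alpha = (1 - g) * alpha + g * frac       # EWMA of congestion extent
    if marked:
        cwnd *= 1 - alpha / 2                # cut proportional to congestion
    return cwnd, alpha

cwnd, alpha = 100.0, 0.0
for marks in (0, 10, 50, 0):                 # hypothetical marking pattern
    cwnd, alpha = dctcp_update(cwnd, alpha, marks, acked=100)
    print(f"marked={marks:2d}  cwnd={cwnd:6.1f}  alpha={alpha:.3f}")
```

Because alpha stays near zero under light marking, DCTCP trims the window only slightly, which is what keeps queues short without sacrificing throughput.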

Security and Privacy

Security practices in Jupiter-style networks combine access control lists from vendors such as Cisco Systems, segmentation via VXLAN and NVGRE, and authentication using IEEE 802.1X and RADIUS. Control-plane integrity leverages cryptographic extensions similar to BGPsec, and controller-switch communication is protected by secure channels akin to IPsec and TLS, while telemetry and logging integrate with Splunk and the ELK Stack for auditability. Privacy considerations for tenant traffic align with models adopted by Amazon Web Services and Google Cloud Platform for multi-tenant isolation and compliance with regulatory regimes such as the GDPR and HIPAA.
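
VXLAN segmentation, mentioned above, isolates tenants by wrapping each Layer 2 frame in a UDP packet that carries a 24-bit VXLAN Network Identifier (VNI); endpoints only deliver traffic within a VNI. The sketch below packs and parses the 8-byte VXLAN header defined in RFC 7348; the tenant-to-VNI mapping is a hypothetical example, not a Jupiter configuration.

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def encode_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    return struct.pack("!II", VXLAN_FLAG_VNI << 24, vni << 8)

def decode_vni(header: bytes) -> int:
    """Extract the 24-bit VNI, checking that the I flag is set."""
    flags_word, vni_word = struct.unpack("!II", header)
    assert (flags_word >> 24) & VXLAN_FLAG_VNI, "VNI flag not set"
    return vni_word >> 8

TENANT_VNI = {"tenant-a": 5001, "tenant-b": 5002}  # hypothetical tenant mapping
header = encode_vxlan_header(TENANT_VNI["tenant-a"])
print(header.hex(), decode_vni(header))  # -> 0800000000138900 5001
```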

History and Development

Design principles that shaped Jupiter emerged from academic work at the University of California, Berkeley, and Stanford University, and from industry research groups at Google LLC and Facebook, Inc. in the 2010s, following earlier fabric designs such as Jellyfish and projects at Microsoft Research. Key innovations include scaling control planes, optimizing large TCAMs, and programmable pipelines inspired by OpenFlow research at Nicira (company) and scholarly publications in venues such as SIGCOMM and NSDI. Subsequent iterations incorporated contributions from silicon vendors such as Broadcom Inc. and from Linux Foundation software projects such as ONAP (the Open Network Automation Platform).

Comparison with Other Networks

Compared with the traditional three-tier designs used by enterprises such as IBM and Hewlett Packard Enterprise, Jupiter-style fabrics prioritize east-west bandwidth and fabric-wide programmability, in the spirit of Google's B4 wide-area network and Facebook's data center fabric efforts. Against alternative architectures such as Jellyfish and fat-tree designs, Jupiter emphasizes operational simplicity, deterministic performance, and vendor integration akin to solutions from Arista Networks and Cumulus Networks. In contrast to wide-area routing frameworks such as those operated by Verizon Communications and AT&T, Jupiter focuses on intra-data-center scale and tight integration with orchestration platforms including Kubernetes and OpenStack.

Category:Computer networking