| Open Network Computing | |
|---|---|
| Name | Open Network Computing |
| Developer | Sun Microsystems, ONC Research Group |
| Released | 1980s |
| Programming language | C, C++ |
| Operating system | Unix, Linux, Microsoft Windows |
| License | BSD license, MIT License |
| Genre | Remote procedure call |
Open Network Computing is a remote procedure call (RPC) framework and set of protocols originally developed to enable transparent procedure calls across networked machines. It provided primitives for invoking remote services, a machine-independent representation of data, and language bindings that allowed applications on Unix systems, VAX machines, Sun Microsystems workstations, and other platforms to interoperate over heterogeneous networks. The system influenced subsequent middleware, distributed-computing projects, and standards bodies.
Open Network Computing defines an RPC mechanism, a standard for data serialization, and network services for name and service mapping. It introduced the External Data Representation and a protocol suite used to invoke procedures across machines running different operating systems, including Sun workstations, BSD, System V, and later Microsoft Windows ports. The framework influenced implementations in environments managed by vendors such as AT&T, Hewlett-Packard, and Digital Equipment Corporation, and academic projects at institutions such as the Massachusetts Institute of Technology and the University of California, Berkeley.
The design emerged in the 1980s at Sun Microsystems during an era of expanding networked workstations and heterogeneous LANs driven by technologies like Ethernet, TCP/IP, and the Internet. Early contributions were made by engineers and researchers associated with Sun, and the approach was disseminated through collaborations with UNIX vendors and standards organizations such as the IETF, and at forums attended by participants from AT&T, Digital Equipment Corporation, and Hewlett-Packard. Subsequent revisions coincided with the rise of distributed systems research at institutions including Carnegie Mellon University and Stanford University, and with industry efforts embodied in product lines from Sun Microsystems and third-party implementers.
The architecture centers on an RPC runtime, the External Data Representation (XDR) for portable encoding, and transport bindings over TCP and UDP. The system identifies services by program, version, and procedure numbers managed via a network registry service (the portmapper, later rpcbind) that parallels directory services used in deployments by companies like Sun Microsystems and IBM. The design separates stub generation, which relies on language-specific compilers for languages such as C and C++, from the transport and dispatch layers; this separation echoes component models explored at institutions like MIT and projects at Bell Labs. Supporting protocols and extensions addressed authentication and mapping needs, paralleling efforts by the IETF and implementations in products from Hewlett-Packard and Microsoft Corporation.
Multiple implementations exist across commercial, open-source, and academic ecosystems. Notable codebases include the implementations bundled with SunOS and later Solaris, ports in BSD variants, and third-party projects integrated into Linux distributions. Toolchains for stub generation (such as the rpcgen compiler) and runtime libraries were produced by vendors such as Sun Microsystems, Hewlett-Packard, and IBM, and by independent projects hosted by organizations like the Free Software Foundation and the developer communities around NetBSD and OpenBSD. Commercial middleware vendors incorporated the model into products alongside other RPC and object models, such as those from Microsoft Corporation and the object request brokers standardized in forums like the OMG.
Security models originally relied on simple authentication flavors and host-based controls common in Sun Microsystems deployments; later implementations integrated stronger mechanisms, such as RPCSEC_GSS, inspired by IETF standards and authentication systems like Kerberos. Limitations include challenges in firewall traversal, reliability concerns arising from stateless transport semantics over UDP, and difficulty adapting to the firewall and NAT scenarios that emerged with the growth of the Internet and of enterprise networks built on equipment from vendors such as Cisco Systems. Performance trade-offs between synchronous RPC semantics and asynchronous messaging prompted comparisons with message-oriented middleware used by enterprises like IBM and with academic alternatives developed at MIT and Carnegie Mellon University.
The framework was widely used for networked file and service access on workstation clusters, integration of heterogeneous systems in data centers operated by companies like Sun Microsystems and Digital Equipment Corporation, and academic testbeds at institutions such as the University of California, Berkeley and Stanford University. Typical applications included remote filesystem protocols (most notably NFS), network lookup services, distributed print services, and inter-process communication in enterprise environments managed by vendors like Hewlett-Packard and IBM. It also served as a foundation for experimenting with distributed algorithms in research groups at Carnegie Mellon University and in networked applications developed at MIT.
The approach influenced and coexisted with standards and efforts led by bodies such as the IETF, and work on data representation informed later formats and middleware standards referenced by organizations like ISO and the OMG. Interoperability was achieved via standardized XDR definitions, program/version registries, and vendor-provided bindings that allowed systems from Sun Microsystems, IBM, Hewlett-Packard, and Microsoft Corporation to interoperate in mixed environments. The model's legacy persists in later RPC and remote invocation standards adopted in heterogeneous infrastructures run by enterprises and academic consortia.