| iVDGL | |
|---|---|
| Name | iVDGL (International Virtual Data Grid Laboratory) |
| Formation | 2001 |
| Dissolution | 2006 |
| Purpose | Grid computing research and infrastructure |
| Headquarters | United States |
| Region served | International |
| Merged into | Open Science Grid |
iVDGL
iVDGL (the International Virtual Data Grid Laboratory) was a distributed computing initiative that provided a transcontinental grid computing testbed linking major research institutions and national laboratories across the United States, Europe, and Asia. It served as an integration point for projects sponsored by the National Science Foundation, for collaborations with Lawrence Berkeley National Laboratory, Fermilab, and Argonne National Laboratory, and for coordination with international partners such as CERN. The project fostered software development, experiment support, and cross-domain interoperability with initiatives like TeraGrid and the Open Science Grid.
iVDGL created an experimental environment in which middleware and application developers could test distributed computing services across sites including the University of Chicago, the University of Illinois Urbana–Champaign, the University of California, Berkeley, Stanford University, the Massachusetts Institute of Technology, and the California Institute of Technology. It connected compute resources from national laboratories such as Brookhaven National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, and Sandia National Laboratories, and from the National Center for Supercomputing Applications, to international centers such as CERN, the European Organization for Nuclear Research. The project emphasized interoperability with software stacks developed by the Globus Alliance, the Condor Project, the USC Information Sciences Institute, and commercial partners such as IBM, Sun Microsystems, and HP.
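A typical developer interaction with a testbed site of this kind combined a credential step with a remote test job. The following is a minimal sketch, not iVDGL tooling: it assumes a Globus Toolkit installation providing the grid-proxy-init and globus-job-run command-line clients, and the gatekeeper contact string is a placeholder.

```python
#!/usr/bin/env python
"""Hypothetical smoke test for a grid site (assumes Globus Toolkit CLIs)."""
import subprocess

# Placeholder gatekeeper contact; a real site would publish its own.
GATEKEEPER = "gatekeeper.example.edu/jobmanager-fork"

# Step 1: create a short-lived GSI proxy certificate from the user's
# long-term X.509 credential (prompts for the passphrase on the terminal).
subprocess.run(["grid-proxy-init", "-valid", "12:00"], check=True)

# Step 2: run a trivial job through the GRAM gatekeeper and print the
# hostname of the worker it landed on.
result = subprocess.run(
    ["globus-job-run", GATEKEEPER, "/bin/hostname"],
    check=True, capture_output=True, text=True,
)
print("remote host:", result.stdout.strip())
```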
The initiative emerged in the wake of early grid and distributed computing efforts such as GriPhyN and PVM-based experiments, with funding from the National Science Foundation and coordination among academic consortia including the University of Michigan and the University of Wisconsin–Madison. Early milestones included integration tests with middleware from the Globus Toolkit, experiments in data movement using GridFTP, and scheduling interoperability with Condor. iVDGL collaborated on large-scale demonstrations at events affiliated with the Supercomputing Conference and provided infrastructure for experimental campaigns linked to projects such as the ATLAS experiment and Large Hadron Collider commissioning activities at CERN. By the mid-2000s the initiative had merged its efforts into wider federations like the Open Science Grid and influenced policy at agencies including the Department of Energy.
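The data-movement experiments centered on GridFTP's third-party transfers between sites and its parallel TCP streams. As a minimal sketch, assuming the Globus Toolkit's globus-url-copy client is installed and using placeholder gsiftp:// endpoints:

```python
#!/usr/bin/env python
"""Hypothetical GridFTP transfer driver (assumes globus-url-copy)."""
import subprocess

# Placeholder endpoints; real campaigns used site-published gsiftp:// URLs.
SRC = "gsiftp://source.example.edu/data/run001.dat"
DST = "gsiftp://dest.example.org/staging/run001.dat"

# Two gsiftp:// URLs trigger a third-party transfer (data flows directly
# between the two servers); -p 4 requests four parallel TCP streams, the
# knob these experiments tuned against wide-area round-trip times.
subprocess.run(["globus-url-copy", "-p", "4", SRC, DST], check=True)
```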
The testbed architecture combined resource providers at universities and laboratories running compute clusters, storage arrays, and network links, coordinated via middleware from the Globus Toolkit, including components such as GRAM for job submission and GridFTP for data transfer. Authentication and authorization employed protocols interoperable with Kerberos V5 realms and X.509 certificate systems built on IETF standards and OpenSSL toolchains. Workload management integrated schedulers like Condor with site-local batch systems such as PBS Professional and SLURM clusters deployed at centers like Oak Ridge National Laboratory and Argonne National Laboratory. Monitoring and accounting drew on host-check concepts of the kind found in Nagios and on centralized logging pipelines conceptually similar to later tools such as Logstash, while the project collaborated with networking initiatives like Internet2 and National LambdaRail.
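One concrete bridge between Condor and GRAM in a stack like this was Condor-G, which lets a local Condor scheduler dispatch jobs to a remote GRAM gatekeeper that in turn hands them to the site's batch system. A minimal sketch, with a hypothetical gatekeeper contact and assuming condor_submit is on the PATH:

```python
#!/usr/bin/env python
"""Hypothetical Condor-G submission: write a submit description that
routes a job through a remote GRAM gatekeeper, then call condor_submit.
The hostname is a placeholder."""
import subprocess
import textwrap

SUBMIT = textwrap.dedent("""\
    # Condor-G: the 'grid' universe hands the job to a GRAM gatekeeper,
    # which forwards it to the site's PBS batch system.
    universe      = grid
    grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs
    executable    = /bin/hostname
    output        = job.out
    error         = job.err
    log           = job.log
    queue
    """)

with open("gram_job.sub", "w") as fh:
    fh.write(SUBMIT)

# condor_submit parses the description and queues the job; Condor-G then
# tracks it against the remote site on the submitter's behalf.
subprocess.run(["condor_submit", "gram_job.sub"], check=True)
```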
iVDGL operated heterogeneous testbeds across campuses including the University of California, San Diego, the University of Florida, Purdue University, Virginia Tech, and Columbia University, and at national centers including Fermilab and Lawrence Berkeley National Laboratory. Deployment phases ranged from initial site onboarding, using site deployment guides developed with contributions from Oak Ridge National Laboratory staff, to large-scale interoperability exercises with European grid pilots such as the European DataGrid and bilateral experiments with Keio University and the University of Tokyo. Demonstrations occurred at gatherings such as the Supercomputing Conference and were coordinated with scientific communities ranging from the Human Genome Project era to high-energy physics collaborations like the CMS experiment.
Researchers used the infrastructure to advance distributed data management, remote instrumentation control, and ensemble simulation workflows. Application domains included high-energy physics for projects like the ATLAS and CMS experiments, astrophysics collaborations linked to the Sloan Digital Sky Survey, bioinformatics pipelines related to the Human Genome Project and National Institutes of Health-funded work, and materials science simulations associated with DOE Office of Science programs. Tools developed on the testbed supported middleware research at institutions such as the University of California, Berkeley, and Princeton University, and informed science gateways coordinated with TeraGrid and the later XSEDE initiative.
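Ensemble workflows of the kind described above map naturally onto Condor's DAGMan, which orders batches of related jobs. Below is a minimal sketch under stated assumptions: condor_submit_dag is available, and simulate.sub and merge.sub are hypothetical submit descriptions for one ensemble member and a final merge step.

```python
#!/usr/bin/env python
"""Hypothetical ensemble workflow: write a DAGMan file that runs N
independent simulation jobs, then a merge job once all of them finish.
simulate.sub and merge.sub are assumed to exist."""
import subprocess

N = 8  # ensemble size

lines = []
for i in range(N):
    # Each member reuses simulate.sub; VARS passes a per-member seed.
    lines.append(f"JOB member{i} simulate.sub")
    lines.append(f'VARS member{i} seed="{i}"')
lines.append("JOB merge merge.sub")
# The merge step runs only after every ensemble member succeeds.
lines.append("PARENT " + " ".join(f"member{i}" for i in range(N)) + " CHILD merge")

with open("ensemble.dag", "w") as fh:
    fh.write("\n".join(lines) + "\n")

# DAGMan releases ready members in parallel and enforces the dependency.
subprocess.run(["condor_submit_dag", "ensemble.dag"], check=True)
```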
The project's outcomes influenced the creation and governance of federated infrastructures like the Open Science Grid and informed middleware evolution within the Globus Alliance and scheduling approaches used by the Condor Project. Best practices from site operations fed into operational models at the National Center for Supercomputing Applications and informed procurement and networking strategies for Internet2 members and National LambdaRail participants. Many of the software artifacts, deployment guides, and lessons learned carried over into successor programs such as Open Science Grid operations, the consolidation of distributed computing in collaborations at the Large Hadron Collider, and university research computing organizations at institutions including the University of California, Berkeley, the University of Chicago, and the Massachusetts Institute of Technology.