LLMpedia: The first transparent, open encyclopedia generated by LLMs

MIPLIB 2010

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Gurobi (Hop 5)
Expansion funnel: 3 extracted → 0 after dedup → 0 after NER → 0 enqueued
MIPLIB 2010
Name: MIPLIB 2010
Discipline: Mathematical optimization
Established: 2010
Predecessor: MIPLIB 2003
Successor: MIPLIB 2017
Publisher: Gesellschaft für Informatik
Main organizer: Zuse Institute Berlin


MIPLIB 2010 is a benchmark library and test collection of mixed integer programming problems, developed to support research at institutions such as the Zuse Institute Berlin, INFORMS, IBM, Google, and École Polytechnique. It serves algorithm developers at Princeton, ETH Zurich, the University of California, Berkeley, the University of Cambridge, and TU Darmstadt, and is used by practitioners at Siemens, Boeing, Microsoft, and SAP to evaluate solvers such as CPLEX, Gurobi, SCIP, and CBC.

Overview

The project was produced by a collaboration including the Zuse Institute Berlin, associates of Hans Mittelmann, researchers at École Polytechnique, and contributors from IBM Research, Google Research, and the Vienna University of Technology. Its scope was informed by earlier efforts such as MIPLIB 2003 and by benchmarking traditions from the DIMACS challenges, the SAT Competition, and the COIN-OR community. The release targeted users of solvers from IBM, Google, and FICO, as well as academic groups at ETH Zurich, Princeton University, and Carnegie Mellon University, providing a curated suite reflecting instances encountered at Boeing, Siemens, Microsoft Research, and Hewlett-Packard Laboratories.

Problem Instances

The library collects diverse problem instances drawn from industrial partners including Boeing, Siemens, IBM, Microsoft, and General Electric, and from academic groups at INRIA, École Polytechnique, TU Darmstadt, and Carnegie Mellon University. Instances represent formulations encountered in logistics at UPS, routing at Deutsche Bahn, energy systems modeling at the National Renewable Energy Laboratory, scheduling at Boeing, and telecommunications at Nokia. Benchmarks include mixed integer linear programs contributed by banks such as Goldman Sachs for portfolio optimization, by oil companies such as ExxonMobil for blending, and by public agencies such as NASA for mission planning, with instance provenance tracing to research groups at Stanford, MIT, and the University of Toronto.
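MIPLIB instances are distributed in the MPS file format. As a minimal sketch of what that format looks like, the following reads only the ROWS and COLUMNS sections of a toy fixed-form MPS file; the instance shown is invented for illustration, and a real parser would also have to handle the RHS, RANGES, BOUNDS, and integer MARKER lines found in actual library instances.

```python
def parse_mps(text):
    """Toy reader for the ROWS and COLUMNS sections of an MPS file.
    Illustrative only; not a complete MPS parser."""
    rows = {}        # row name -> sense ('N' objective, 'L', 'G', 'E')
    cols = {}        # column name -> {row name: coefficient}
    section = None
    for line in text.splitlines():
        if not line.strip() or line.startswith("*"):
            continue                      # skip blank and comment lines
        if not line[0].isspace():         # section headers start in column 1
            section = line.split()[0]
            continue
        fields = line.split()
        if section == "ROWS":
            sense, name = fields
            rows[name] = sense
        elif section == "COLUMNS":
            name, pairs = fields[0], fields[1:]
            entry = cols.setdefault(name, {})
            # COLUMNS lines carry (row, coefficient) pairs
            for row, coef in zip(pairs[::2], pairs[1::2]):
                entry[row] = float(coef)
    return rows, cols

# Invented two-variable knapsack-style instance, not a MIPLIB file.
example = """\
NAME          TOY
ROWS
 N  COST
 L  CAP
COLUMNS
    X1        COST      3.0   CAP       1.0
    X2        COST      5.0   CAP       2.0
RHS
    RHS       CAP       10.0
ENDATA
"""
rows, cols = parse_mps(example)
```

Fixed-form MPS is column-sensitive in its full specification, but free-form whitespace splitting, as above, is enough to convey the section structure.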

Benchmarking and Performance Metrics

Evaluation protocols mirror practices from the SAT Competition, the DIMACS benchmarks, and the OR-Tools user community, employing metrics used by IBM Research, Gurobi Optimization, and FICO. Performance metrics include time-to-best-bound as used by CPLEX teams at IBM, primal-dual gap monitoring employed at Google Research, node counts similar to those reported by the SCIP authors at the Zuse Institute Berlin, and memory footprints tracked by researchers at ETH Zurich. Results are compared across platforms including Microsoft Azure, Amazon Web Services, and high-performance clusters at Lawrence Berkeley National Laboratory, while reproducibility is promoted following recommendations from the Association for Computing Machinery and IEEE.
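Two of the metrics mentioned above are easy to state concretely. A shifted geometric mean is the standard aggregate for solver running times in MIP benchmarking (the shift damps the influence of very small times), and the primal-dual gap measures how far a run is from proven optimality. The sketch below uses one common convention for the gap; individual solvers define it slightly differently.

```python
import math

def shifted_geo_mean(times, shift=10.0):
    """Shifted geometric mean of running times, the aggregate
    commonly reported in MIP benchmark comparisons."""
    n = len(times)
    log_sum = sum(math.log(t + shift) for t in times)
    return math.exp(log_sum / n) - shift

def primal_dual_gap(primal, dual):
    """Relative primal-dual gap; 0.0 means proven optimality.
    Uses |p - d| / max(|p|, |d|); conventions vary between solvers."""
    if primal == dual:
        return 0.0
    return abs(primal - dual) / max(abs(primal), abs(dual))
```

For example, a run with primal bound 100 and dual bound 90 has a relative gap of 0.1 under this convention.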

Submission and Evaluation Process

Submissions were collected from solver developers at Gurobi and IBM, from Zuse Institute collaborators, and from independent researchers at INRIA and the University of Edinburgh, with evaluation coordinated by the Zuse Institute Berlin and contributors from Hans Mittelmann's benchmarking group. Evaluation harnessed computing resources at the National Energy Research Scientific Computing Center, Lawrence Livermore National Laboratory, and university clusters at Princeton and Cambridge, following standardized input/output formats used by COIN-OR, the SAT community, and the DIMACS initiative. Contributors followed policies inspired by the European Grid Infrastructure and data-sharing guidelines from the Organisation for Economic Co-operation and Development.
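One outcome of the evaluation process was that MIPLIB 2010 labeled instances as easy, hard, or open according to whether and how quickly any solver could prove optimality. The helper below is an illustrative sketch of that idea; the function name, result encoding, and one-hour threshold are assumptions for the example, not the official classification rules.

```python
def classify_instance(results, easy_limit=3600.0):
    """Sketch of easy/hard/open labels in the spirit of MIPLIB 2010:
    'easy'  - some solver proved optimality within `easy_limit` seconds,
    'open'  - no solver has proved optimality at all,
    'hard'  - solvable, but not within the limit.
    `results` maps solver name -> (status, seconds); the names,
    statuses, and threshold here are illustrative assumptions."""
    solved = [t for (status, t) in results.values() if status == "optimal"]
    if not solved:
        return "open"
    return "easy" if min(solved) <= easy_limit else "hard"
```

For instance, an instance solved to optimality by any solver within the limit would be labeled easy even if other solvers time out on it.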

Impact and Legacy

The collection influenced solver development at Gurobi, IBM, and the SCIP team, and guided academic research at MIT, Stanford, ETH Zurich, and the University of Cambridge, informing algorithmic advances showcased at conferences such as IPCO and the SIAM Conference on Optimization. It shaped curricula at Carnegie Mellon University and the University of Toronto, and underpinned industrial benchmarking at Siemens, Boeing, and Microsoft Research, while inspiring successor collections and initiatives supported by the European Commission, the German Research Foundation, and national labs such as Sandia and Argonne. The legacy persists in modern solver evaluation practices at INFORMS meetings, in toolchains used by Google, and in datasets archived by the Zuse Institute Berlin and partners such as the COIN-OR Foundation.

Category:Benchmarking collections
Category:Mathematical optimization
Category:Zuse Institute Berlin