| MIPLIB | |
|---|---|
| Name | MIPLIB |
| Genre | Benchmark library |
| Developer | Mixed Integer Programming community |
| Initial release | 1992 |
| Latest release | MIPLIB 2017 |
| Platform | Cross-platform |
| License | Public domain / varied |
MIPLIB
MIPLIB (Mixed Integer Programming Library) is a publicly available benchmark library of mixed-integer programming instances, widely used by researchers and practitioners to evaluate optimization algorithms, solver performance, and computational strategies. It serves as a common reference corpus for academic research groups and industrial solver developers worldwide.
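Formally, each instance encodes a mixed-integer program in the standard textbook form (a generic formulation, not specific to any one MIPLIB release):

$$\min_{x}\; c^{\top} x \quad \text{s.t.} \quad A x \le b,\;\; l \le x \le u,\;\; x_j \in \mathbb{Z} \;\; \forall j \in I,$$

where $I$ indexes the integer-constrained variables. Instances are distributed as (gzipped) MPS files encoding exactly this data.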
MIPLIB provides a curated set of mixed integer programming test cases, drawn from real-world applications and synthetic generators, that exercise the branch-and-bound, branch-and-cut, and cutting-plane techniques at the core of commercial and academic solvers such as CPLEX, Gurobi, FICO Xpress, MOSEK, SCIP, CBC (COIN-OR branch and cut), and GLPK. The instances are also accessed through modeling environments such as AMPL, GAMS, Pyomo, PuLP, JuMP, and ZIMPL, and through services such as the NEOS Server.
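A minimal sketch of a single benchmark run, assuming the PySCIPOpt bindings for SCIP and a locally downloaded instance file (the file name and time limit are illustrative):

```python
# Minimal sketch: solve one MIPLIB instance with SCIP via PySCIPOpt.
# Assumes `pip install pyscipopt` and a local instance file; the file
# name below is illustrative rather than a required path.
from pyscipopt import Model

model = Model("miplib-run")
model.readProblem("instance.mps.gz")  # MIPLIB distributes instances as (gzipped) MPS files
model.setParam("limits/time", 600)    # wall-clock limit, as in typical benchmark protocols
model.optimize()

status = model.getStatus()            # e.g. "optimal", "timelimit", "infeasible"
if model.getNSols() > 0:
    print(status, "primal:", model.getObjVal(), "dual:", model.getDualbound())
else:
    print(status, "- no feasible solution found")
```

An analogous run can be set up with any of the solvers above; only the model-loading and parameter-setting calls differ.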
MIPLIB originated in 1992 as a community effort in the benchmarking tradition of libraries such as Netlib, TSPLIB, and the DIMACS Implementation Challenges. Early contributors included researchers affiliated with the Mathematical Programming Society (now the Mathematical Optimization Society) and INFORMS. Over successive releases, MIPLIB 3 (1996), MIPLIB 2003, MIPLIB 2010, and MIPLIB 2017, curators coordinated by the Zuse Institute Berlin, together with academic and industrial partners, refined the instance selection, classification criteria, and solution-quality metrics. The releases are presented and discussed at venues such as the International Conference on Integer Programming and Combinatorial Optimization (IPCO), CPAIOR, the INFORMS Annual Meeting, and the SIAM Conference on Optimization.
The library groups instances by hardness, origin, and modeling features, with categories reflecting practical problems such as vehicle routing, facility location, job shop scheduling, unit commitment, lot sizing, cutting stock, multicommodity flow, network design, knapsack, set covering, set partitioning, traveling salesman, bin packing, and graph coloring, alongside synthetic generators rooted in the classical combinatorial optimization literature. Many instances are contributed from industrial case studies in sectors such as energy, logistics, manufacturing, transportation, and pharmaceuticals, often anonymized; a formulation of the simplest of these classes appears below.
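As a concrete illustration (a generic textbook formulation, not a particular MIPLIB instance), the 0-1 knapsack problem instantiates the standard form above:

$$\max \sum_{j=1}^{n} p_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j x_j \le W, \qquad x_j \in \{0,1\} \;\; \forall j,$$

with item profits $p_j$, weights $w_j$, and knapsack capacity $W$.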
MIPLIB has underpinned algorithmic advances reported in journals such as Mathematical Programming, Operations Research, Management Science, SIAM Journal on Optimization, INFORMS Journal on Computing, Computers & Operations Research, Annals of Operations Research, and the European Journal of Operational Research. Solver enhancements, including presolve routines, cutting-plane families (Gomory, clique, cover, flow cover), primal heuristics, and parallel branch-and-bound, are routinely evaluated on MIPLIB instances by teams at Gurobi Optimization, IBM Research, and the Zuse Institute Berlin. More recently, machine-learning approaches to branching and heuristic selection from groups such as Google DeepMind, presented at venues like NeurIPS and ICML, have also used MIPLIB instances for evaluation.
Researchers report standardized metrics, such as time to optimality or to best known solution, primal-dual gap, branch-and-bound node count, and memory footprint, typically aggregated with shifted geometric means (sketched below), following protocols similar to those of SPEC CPU and the DIMACS Implementation Challenges. Benchmarks are run on infrastructures ranging from academic HPC clusters to cloud platforms such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Evaluation workflows integrate with modeling languages and solvers such as AMPL, GAMS, Pyomo, JuMP, CPLEX, Gurobi, and SCIP, supporting the reproducibility standards encouraged by journals and by INFORMS.
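A minimal sketch, in pure Python with illustrative numbers, of two of these metrics: the relative primal-dual gap and the shifted geometric mean used to aggregate runtimes across a test set (the gap convention shown is one common choice; solvers differ in the exact denominator):

```python
import math

def primal_dual_gap(primal: float, dual: float) -> float:
    """Relative gap |primal - dual| / max(|primal|, |dual|); 0 when the bounds meet."""
    if primal == dual:
        return 0.0
    return abs(primal - dual) / max(abs(primal), abs(dual), 1e-10)

def shifted_geometric_mean(times, shift: float = 10.0) -> float:
    """Shifted geometric mean exp(mean(log(t + s))) - s.
    The shift s damps the influence of very small runtimes."""
    logs = [math.log(t + shift) for t in times]
    return math.exp(sum(logs) / len(logs)) - shift

# Illustrative runtimes (seconds) for one solver over a small instance set.
runtimes = [1.2, 37.0, 600.0, 4.5, 88.1]
print(round(shifted_geometric_mean(runtimes), 1))   # aggregate runtime score
print(round(primal_dual_gap(105.0, 100.0), 3))      # gap of an unfinished run
```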
Each release gathered challenging instances that revealed solver weaknesses and spurred improvements; progress between releases is documented in the MIPLIB report papers, at Mixed Integer Programming (MIP) Workshop sessions, in IPCO and CPAIOR proceedings, and in reports by solver teams at Gurobi, IBM ILOG, and the Zuse Institute Berlin. Benchmark-driven engineering has produced large cumulative speedups in commercial and open-source solvers, documented in Mathematical Programming and recognized at INFORMS meetings, and these solvers in turn power industrial deployments in logistics, airline and crew scheduling, energy unit commitment, and telecommunication network planning.
Category:Benchmark datasets