| Knapsack problem | |
|---|---|
| Name | Knapsack problem |
| Input | finite set of items with values and weights, capacity constraint |
| Output | subset of items maximizing total value without exceeding capacity |
| Complexity | NP-hard (optimization), NP-complete (decision) |
Knapsack problem
The Knapsack problem is a classical combinatorial optimization problem studied in theoretical computer science, discrete mathematics, and operations research. It models selection under resource constraints: a finite set of items with associated benefits and costs must be chosen to maximize total benefit subject to a capacity limit. The problem connects to foundational work in computability, to NP-completeness results of Richard Karp and Stephen Cook, and to algorithmic paradigms developed at institutions such as Bell Labs, MIT, and IBM. Major developments relate to complexity theory presented at conferences such as STOC, FOCS, and ICALP and to applied deployments at companies including Google, Amazon, and Microsoft.
In its standard 0/1 form the problem presents n items, each with a positive integer weight and value, and a single capacity bound; the task is to choose a subset so that the total weight does not exceed the capacity while the total value is maximized. Formalizations and reductions frequently reference Karp's list of 21 NP-complete problems, papers by Jack Edmonds and Richard Karp, and textbook treatments by Donald Knuth and Michael Garey. Variants and formal decision forms are often used in complexity proofs by researchers at institutions such as Princeton University, Stanford University, and the University of California, Berkeley.
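The 0/1 formulation above can be made concrete with a brute-force sketch that enumerates all 2^n subsets; the item numbers below are illustrative, not drawn from any benchmark:

```python
from itertools import combinations

def knapsack_bruteforce(weights, values, capacity):
    """Try every subset of the n items (2^n candidates) and keep the best
    feasible one -- tractable only for small n, but it states the problem."""
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total_weight = sum(weights[i] for i in subset)
            if total_weight <= capacity:
                total_value = sum(values[i] for i in subset)
                if total_value > best_value:
                    best_value, best_subset = total_value, subset
    return best_value, best_subset

print(knapsack_bruteforce([3, 4, 5], [4, 5, 6], 8))  # -> (10, (0, 2))
```

The exponential enumeration is what the exact and approximation algorithms discussed below are designed to avoid.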
Common variants include the 0/1 variant, the bounded and unbounded variants, the fractional variant, multi-dimensional or multi-constraint versions, and the multiple-knapsack and bin-packing relatives studied in literature from IBM Research, AT&T Bell Labs, and universities such as the University of Cambridge and ETH Zurich. Other named variants include the subset-sum special case (knapsack with each value equal to its weight) and the partition problem investigated by scholars at the University of Waterloo and Carnegie Mellon University.
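As a sketch of how the unbounded variant differs from 0/1, the dynamic program below allows each item to be reused any number of times; setting every value equal to its weight recovers the subset-sum special case (the numbers are illustrative):

```python
def unbounded_knapsack(weights, values, capacity):
    """Unbounded variant: dp[c] = best value at capacity c, where each
    item may be taken repeatedly (iterating capacities upward permits reuse)."""
    dp = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(unbounded_knapsack([3, 4], [4, 5], 10))  # -> 13 (two copies of item 0 plus item 1)
```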
The decision version is NP-complete, established by reductions of the kind catalogued by Garey and Johnson, and the optimization version is NP-hard. Complexity classifications build on landmark contributions of Stephen Cook and Leonid Levin and later refinements by Christos Papadimitriou and Richard Karp. Pseudo-polynomial-time solvability for integral weights follows from dynamic programming, traced to work at Bell Labs and surveyed in SIAM proceedings; hardness under polynomial-time reductions ties into completeness results in texts from Cambridge University Press and Springer-Verlag.
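The pseudo-polynomial dynamic program mentioned above runs in O(nW) time for integral weights, which is polynomial in the numeric value of the capacity W but not in its bit length; a minimal sketch:

```python
def knapsack_dp(weights, values, capacity):
    """Classic O(n*W) table for the 0/1 variant: dp[c] holds the best value
    achievable with capacity c using the items processed so far."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities downward so each item is used at most once (0/1)
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_dp([3, 4, 5], [4, 5, 6], 8))  # -> 10
```

Because W can be exponential in the input's bit length, this running time does not contradict NP-hardness.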
Exact algorithms include dynamic programming, branch-and-bound, meet-in-the-middle, and integer linear programming formulations solved with tools from IBM ILOG, Gurobi, and COIN-OR. Approximation algorithms and heuristics include greedy strategies, genetic algorithms researched at the University of Illinois at Urbana–Champaign, simulated annealing projects at Los Alamos National Laboratory, and tabu search studies associated with NASA scheduling problems. Algorithm engineering and empirical evaluations are reported in venues such as the Journal of the ACM, the SIAM Journal on Computing, and conferences such as SODA.
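Of the exact methods above, meet-in-the-middle is perhaps the least widely known; a sketch under the usual formulation enumerates each half of the items (2^(n/2) subsets per half) and combines the halves with a Pareto-pruned binary search:

```python
from bisect import bisect_right
from itertools import combinations

def knapsack_mitm(weights, values, capacity):
    """Meet-in-the-middle for 0/1 knapsack: roughly O(2^(n/2) * n) time."""
    n = len(weights)
    half = n // 2

    def enumerate_half(indices):
        # (weight, value) of every subset of one half of the items
        return [(sum(weights[i] for i in comb), sum(values[i] for i in comb))
                for r in range(len(indices) + 1)
                for comb in combinations(indices, r)]

    left = enumerate_half(range(half))
    right = sorted(enumerate_half(range(half, n)))
    # keep only Pareto-optimal right entries: weight and value both increasing
    pruned, best = [], -1
    for w, v in right:
        if v > best:
            pruned.append((w, v))
            best = v
    r_weights = [w for w, _ in pruned]
    r_values = [v for _, v in pruned]

    answer = 0
    for w, v in left:
        budget = capacity - w
        if budget < 0:
            continue
        j = bisect_right(r_weights, budget) - 1  # heaviest right subset that fits
        if j >= 0:
            answer = max(answer, v + r_values[j])
    return answer

print(knapsack_mitm([3, 4, 5, 6], [4, 5, 6, 7], 10))  # -> 12
```

This trades the O(nW) capacity dependence of dynamic programming for exponential dependence on n/2, which can win when W is huge but n is modest.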
Instances and formulations arise in resource allocation for logistics firms such as FedEx and United Parcel Service, portfolio selection at financial institutions including Goldman Sachs and JPMorgan Chase, cargo loading for aerospace companies such as Boeing and SpaceX, and scheduling in telecommunications projects at AT&T and Verizon Communications. Bioinformatics applications appear in sequencing pipelines at the Broad Institute and the National Institutes of Health; knapsack-based cryptographic schemes such as the Merkle–Hellman cryptosystem were proposed historically and subsequently broken by cryptanalysts.
The fractional variant admits a polynomial-time greedy solution; for the 0/1 and bounded variants there exist fully polynomial-time approximation schemes (FPTAS), the first due to Ibarra and Kim, with later treatments in work associated with Éva Tardos, Vijay Vazirani, and others. These approximation frameworks are central to algorithmic research presented at ICALP and ESA and are implemented within optimization libraries produced by organizations including Google Research and Microsoft Research.
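The greedy solution for the fractional variant sorts items by value density and takes a fraction of at most one item; a minimal sketch with illustrative numbers:

```python
def fractional_knapsack(weights, values, capacity):
    """Greedy by value density (v/w), optimal for the fractional variant:
    fill with the densest items, splitting only the last one taken."""
    items = sorted(zip(weights, values),
                   key=lambda wv: wv[1] / wv[0], reverse=True)
    total, remaining = 0.0, capacity
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)       # possibly a fraction of this item
        total += v * (take / w)
        remaining -= take
    return total

print(fractional_knapsack([3, 4, 5], [4, 5, 6], 8))
```

The same density-sorted greedy gives no constant-factor guarantee for 0/1 knapsack on its own, which is why integrality makes the problem hard while the fractional relaxation stays easy.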
Origins trace to early resource allocation problems in commerce and military logistics; formal computational treatment crystallized with complexity-theory advances at the University of Toronto and Princeton University. Notation conventions (weights w_i, values v_i, capacity W) are standard in the textbook by Cormen, Leiserson, Rivest, and Stein and in monographs published by Springer. Subsequent theoretical and applied research has been carried out across a global network of universities and industry labs including ETH Zurich, Tsinghua University, Seoul National University, and IBM Research.
Category:Algorithms