Linear programming is a foundational method in mathematical optimization for achieving the best outcome, such as maximum profit or lowest cost, within a mathematical model whose requirements are represented by linear relationships. It is a special case of mathematical programming and a key technique in operations research. The development of efficient algorithms, most notably the simplex method, has made it a vital tool for decision-making in business, economics, and engineering.
The core objective is to optimize a linear objective function, subject to a set of linear equality or inequality constraints. The set of feasible solutions forms a convex polytope, which can be explored to find an optimal vertex. This framework is central to fields like supply chain management, resource allocation, and network flow problems. Its theoretical foundations are deeply connected to convex analysis and polyhedral combinatorics.
A standard formulation involves a vector of decision variables, an objective function to maximize or minimize, and a system of constraints. The canonical form maximizes **c**ᵀ**x** subject to *A***x** ≤ **b** and **x** ≥ **0**, where *A* is a matrix, and **b** and **c** are vectors. This structure is studied extensively in matrix theory and linear algebra. The fundamental theorem states that if an optimal solution exists, it occurs at a vertex of the feasible region, a concept linked to the extreme point theory of polytopes.
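The vertex property can be illustrated directly. The sketch below, in plain Python, brute-forces a small two-variable instance (the numbers are illustrative, not taken from the text): every vertex of {**x** : *A***x** ≤ **b**, **x** ≥ **0**} lies where two constraints hold with equality, so enumerating all such intersections and keeping the feasible ones recovers the optimum.

```python
from itertools import combinations

# Assumed example: maximize 3*x1 + 5*x2 subject to
#   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
# The rows [-1, 0] and [0, -1] encode the nonnegativity constraints.
c = [3, 5]
A = [[1, 0], [0, 2], [3, 2], [-1, 0], [0, -1]]
b = [4, 12, 18, 0, 0]

def solve2x2(r1, r2, b1, b2):
    """Solve the 2x2 system r1.x = b1, r2.x = b2; None if singular."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    return [(b1 * r2[1] - r1[1] * b2) / det,
            (r1[0] * b2 - b1 * r2[0]) / det]

# A vertex in the plane is the intersection of two constraints that
# hold with equality; enumerate all pairs and keep feasible points.
vertices = []
for i, j in combinations(range(len(A)), 2):
    x = solve2x2(A[i], A[j], b[i], b[j])
    if x and all(sum(a * xi for a, xi in zip(row, x)) <= bk + 1e-9
                 for row, bk in zip(A, b)):
        vertices.append(x)

best = max(vertices, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
print(best)  # → [2.0, 6.0]; the optimum is attained at a vertex
```

Enumerating vertices is exponential in general and only practical for toy instances; it serves here to make the fundamental theorem concrete, not as a solution method.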
The most historically significant algorithm is the simplex method, developed by George Dantzig, which traverses adjacent vertices of the polytope until it reaches an optimum. Although fast on most practical instances, its exponential worst-case running time motivated the development of polynomial-time interior-point methods, such as those pioneered by Narendra Karmarkar. For problems with special structure, methods like the Dantzig–Wolfe decomposition principle or the Hungarian algorithm for the assignment problem are employed. Software implementations are found in systems like CPLEX and the GNU Linear Programming Kit.
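As a sketch of how the simplex method proceeds, the following minimal tableau implementation (written for illustration, with example data that is assumed rather than taken from the text) handles the canonical form max **c**ᵀ**x** subject to *A***x** ≤ **b**, **x** ≥ **0**, assuming **b** ≥ **0** so that the slack variables form an initial basic feasible solution:

```python
def simplex(c, A, b):
    """Tableau simplex for: maximize c.x s.t. A x <= b, x >= 0, b >= 0.
    Illustrative sketch: no anti-cycling rule, dense arithmetic only."""
    m, n = len(A), len(c)
    # Slack variables turn Ax <= b into equalities; last column is the RHS.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [float(b[i])]
         for i, row in enumerate(A)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))  # objective row
    basis = list(range(n, n + m))                  # slacks start basic
    while True:
        obj = T[-1]
        # Entering variable: most negative reduced cost (Dantzig's rule).
        pivot_col = min(range(n + m), key=lambda j: obj[j])
        if obj[pivot_col] >= -1e-9:
            break                                   # optimal
        # Leaving variable: minimum ratio test keeps the basis feasible.
        ratios = [(T[i][-1] / T[i][pivot_col], i)
                  for i in range(m) if T[i][pivot_col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, pivot_row = min(ratios)
        basis[pivot_row] = pivot_col
        # Pivot: normalize the row, eliminate the column elsewhere.
        piv = T[pivot_row][pivot_col]
        T[pivot_row] = [v / piv for v in T[pivot_row]]
        for i in range(m + 1):
            if i != pivot_row and abs(T[i][pivot_col]) > 1e-12:
                f = T[i][pivot_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[pivot_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# Assumed example: maximize 3*x1 + 5*x2 s.t. x1<=4, 2*x2<=12, 3*x1+2*x2<=18.
x, value = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, value)  # → [2.0, 6.0] 36.0
```

Each pivot moves to an adjacent vertex with no worse objective value; production solvers add anti-cycling rules, sparse linear algebra, and numerically stable refactorization on top of this basic scheme.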
Every linear program, called the primal, has an associated dual linear program. The weak duality theorem guarantees that any feasible dual objective value bounds the optimal primal value, while the strong duality theorem states that the two optimal values coincide whenever an optimal solution exists. This theory, developed by mathematicians including John von Neumann and Leonid Kantorovich, is foundational to economic equilibrium models and provides the basis for sensitivity analysis. The concept of complementary slackness gives optimality conditions linking primal and dual solutions.
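These relationships can be checked numerically. For a primal max **c**ᵀ**x** s.t. *A***x** ≤ **b**, **x** ≥ **0**, the dual is min **b**ᵀ**y** s.t. *A*ᵀ**y** ≥ **c**, **y** ≥ **0**. The snippet below uses a small assumed example (both the data and the stated optima are illustrative, not from the text) to verify feasibility, strong duality, and complementary slackness:

```python
# Assumed example: primal max c.x s.t. Ax <= b, x >= 0.
c = [3, 5]
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

x_opt = [2.0, 6.0]        # primal optimum (e.g. from the simplex method)
y_opt = [0.0, 1.5, 1.0]   # corresponding dual optimum

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Primal feasibility: Ax <= b (x >= 0 holds by inspection).
assert all(dot(row, x_opt) <= bk + 1e-9 for row, bk in zip(A, b))
# Dual feasibility: A^T y >= c, y >= 0.
assert all(dot([A[i][j] for i in range(len(A))], y_opt) >= c[j] - 1e-9
           for j in range(len(c)))

# Strong duality: the two optimal objective values coincide.
print(dot(c, x_opt), dot(b, y_opt))  # → 36.0 36.0

# Complementary slackness: y_i * (b_i - (Ax)_i) = 0 for every constraint,
# i.e. a constraint with slack has dual price zero.
for yi, row, bk in zip(y_opt, A, b):
    assert abs(yi * (bk - dot(row, x_opt))) < 1e-9
```

The dual variables here play the role of shadow prices: the first constraint is slack at the optimum, so its price y₁ is zero, which is exactly what sensitivity analysis reads off the dual solution.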
Applications are vast and critical to modern industry and planning. In business, it is used for portfolio optimization in finance and blending problems in the petroleum industry. Major technology companies like Google and Amazon use it for data center workload distribution and logistics. Governmental and military applications include scheduling for the United States Air Force and crop rotation planning in agriculture. It also underpins algorithms for machine learning models like support-vector machines.
The origins lie in the work of Leonid Kantorovich in the Soviet Union during the 1930s on economic planning, and independently in that of Tjalling Koopmans on transportation problems. The field was transformed in 1947 when George Dantzig, then working on planning problems for the United States Air Force, formulated the general problem and invented the simplex method. Key theoretical advances followed, including the duality theorems of John von Neumann and the polynomial-time ellipsoid algorithm of Leonid Khachiyan. The 1984 introduction of Karmarkar's algorithm by Narendra Karmarkar at Bell Labs spurred major developments in interior-point methods.