| Havel–Hakimi algorithm | |
|---|---|
| Name | Havel–Hakimi algorithm |
| Inventor | Václav Havel; S. L. Hakimi |
| Year | 1955 (Havel); 1962 (Hakimi) |
| Area | Graph theory |
| Input | Degree sequence |
| Output | Graphicity decision |
The Havel–Hakimi algorithm is a constructive procedure for determining whether a finite nonincreasing sequence of nonnegative integers is realizable as the degree sequence of a simple undirected graph. Developed independently by Václav Havel (1955) and S. L. Hakimi (1962), the method reduces a candidate degree sequence by repeatedly connecting the highest‑degree vertex to the vertices of next‑highest degree and testing for feasibility; it is a central tool in combinatorics and discrete mathematics used alongside classical results such as the Erdős–Gallai theorem and the Gale–Ryser criteria. The algorithm has influenced research in network science, computer science, and combinatorial design, and it features in literature related to extremal graph theory, matching theory, and graphical enumeration.
The algorithm is named after Václav Havel and S. L. Hakimi and is associated with work in graph theory, extremal combinatorics, and algorithmic graph realization procedures developed in the context of postwar European and North American research communities. It addresses the classical problem of characterizing graphical degree sequences, also resolved non‑constructively by Paul Erdős and Tibor Gallai, and it is closely related to constructive and existence results found in texts by Richard Stanley, Béla Bollobás, and László Lovász. Applications and connections extend to scholars and institutions including the American Mathematical Society, the London Mathematical Society, and conferences such as the International Congress of Mathematicians where graph realization problems have been prominent.
Given a finite nonincreasing sequence d = (d1, d2, ..., dn) of nonnegative integers, the method proceeds iteratively: remove the first element d1, subtract 1 from each of the next d1 elements (failing immediately if fewer than d1 elements remain), then reorder the resulting list into nonincreasing order and repeat until either the all‑zero sequence is obtained (the sequence is graphical) or a negative entry appears (it is not). The iterative reduction mirrors constructive steps used by Paul Erdős, Andrew Gleason, and other contributors to graphical sequence theory and shares methodology with greedy algorithms popularized in algorithmic literature from institutions like MIT, Stanford, and INRIA. Implementations and pedagogical expositions often reference algorithmic paradigms from Donald Knuth, Robert Sedgewick, and Jon Bentley to analyze data structures and reordering costs, and they compare the procedure to matching algorithms such as those of Jack Edmonds and to degree‑preserving rewiring techniques studied by network researchers at Princeton, Harvard, and UC Berkeley.
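The reduction described above can be sketched directly in Python; the function name `is_graphical` is illustrative, and the code favors clarity over speed by re‑sorting on every iteration:

```python
def is_graphical(seq):
    """Havel-Hakimi test: True iff seq (nonnegative integers) is the
    degree sequence of some simple undirected graph."""
    s = sorted(seq, reverse=True)
    while s and s[0] > 0:
        d = s.pop(0)              # remove the largest degree d1
        if d > len(s):            # fewer than d1 elements remain
            return False
        for i in range(d):        # subtract 1 from the next d1 entries
            s[i] -= 1
            if s[i] < 0:          # negative entry: not graphical
                return False
        s.sort(reverse=True)      # restore nonincreasing order
    return True
```

For example, `is_graphical([3, 3, 2, 2, 1, 1])` succeeds because the reduction ends at the all‑zero sequence, while `is_graphical([3, 1, 1])` fails at the first step since only two vertices remain for a vertex of degree 3.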
Correctness proofs employ inductive and combinatorial arguments similar to those used by Erdős and Gallai, constructing an explicit simple graph realization when the reductions never produce a negative value and showing impossibility otherwise. Standard proofs appeal to invariants and exchange arguments that parallel techniques used in proofs by Paul Erdős, Tibor Gallai, and Vera T. Sós, and they relate to classical theorems by W. T. Tutte and Claude Berge on matchings and factors. Alternative proofs connect to matrix completion results associated with Richard Brualdi and Herbert Ryser, and to constructive algorithms discussed in textbooks by László Lovász, Béla Bollobás, and Miklós Bóna.
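The Erdős–Gallai inequalities provide a non‑constructive counterpart against which the reduction can be cross‑checked. A sketch of the criterion (the function name is ours): a nonincreasing sequence is graphical iff its sum is even and, for every k, the sum of the k largest degrees is at most k(k−1) plus the capped contributions of the remaining degrees:

```python
def erdos_gallai_graphical(seq):
    """Erdos-Gallai criterion: seq is graphical iff sum(seq) is even and
    for every k: sum(d[:k]) <= k*(k-1) + sum(min(d_i, k) for i > k)."""
    d = sorted(seq, reverse=True)
    if sum(d) % 2:                       # odd degree sum: never graphical
        return False
    for k in range(1, len(d) + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True
```

On any fixed sequence this check and the Havel–Hakimi reduction return the same verdict, which is one standard way to sanity‑test an implementation.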
Naïve implementations require repeated sorting and element adjustments, yielding worst‑case time complexity on the order of O(n^2 log n) with comparison‑based re‑sorting at each step, or O(n^2) with counting or bucket sort, while optimized approaches using priority queues or linear‑time merging reduce overhead; analyses draw on algorithmic frameworks from Edsger Dijkstra, Robert Tarjan, and Michael Fredman. Practical performance considerations are discussed in computational treatments by ACM authors, SIAM publications, and in works originating from research groups at Carnegie Mellon, ETH Zurich, and UC San Diego that study degree sequence generation and graph enumeration. Space complexity is typically O(n), and empirical performance is evaluated in experimental studies from the network science community at Santa Fe Institute, Los Alamos National Laboratory, and Imperial College London.
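One way to avoid the per‑round sort, as a sketch: after removing d1 and decrementing the next d1 entries, the list consists of two nonincreasing runs (the decremented prefix and the untouched suffix), which can be merged in O(n), giving O(n^2) overall. Names here are illustrative:

```python
def is_graphical_quadratic(seq):
    """Havel-Hakimi in O(n^2): merge the two sorted runs left by the
    prefix decrement instead of re-sorting the whole list."""
    s = sorted(seq, reverse=True)
    while s and s[0] > 0:
        d = s.pop(0)
        if d > len(s):
            return False
        head = [x - 1 for x in s[:d]]    # decremented prefix, still sorted
        if head and head[-1] < 0:        # smallest prefix entry went negative
            return False
        tail = s[d:]
        s, i, j = [], 0, 0               # merge two nonincreasing runs
        while i < len(head) and j < len(tail):
            if head[i] >= tail[j]:
                s.append(head[i]); i += 1
            else:
                s.append(tail[j]); j += 1
        s.extend(head[i:]); s.extend(tail[j:])
    return True
```

The merge step replaces the O(n log n) sort of each round with O(n) work, while the verdict is unchanged.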
Several extensions adapt the reduction idea to directed degree sequences, bipartite realizations, and multigraph contexts: the Gale–Ryser theorem and Fulkerson–Chen–Anstee criteria govern bipartite and directed versions, while multigraph adaptations relate to work by David Gale, Richard Brualdi, and Herbert Ryser. Other variants incorporate constraints such as forbidden edges, degree bounds, or vertex labels, connecting to constraint satisfaction research at IBM Research, Microsoft Research, and Google Research. Extensions also include random sampling of graphical sequences via Markov chain Monte Carlo methods studied by Persi Diaconis and Susan Holmes, and degree‑preserving rewiring algorithms examined by Duncan Watts and Mark Newman in complex networks research.
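For the bipartite case, the Gale–Ryser criterion mentioned above can be sketched directly: degree sequences a and b on the two sides are realizable by a simple bipartite graph iff their sums match and, for every k, the k largest a‑degrees are covered by the capped b‑degrees (the function name is illustrative):

```python
def gale_ryser(a, b):
    """Gale-Ryser criterion: (a, b) is a bipartite degree pair iff
    sum(a) == sum(b) and sum(a[:k]) <= sum(min(b_j, k)) for every k."""
    a = sorted(a, reverse=True)          # criterion assumes a nonincreasing
    if sum(a) != sum(b):
        return False
    for k in range(1, len(a) + 1):
        if sum(a[:k]) > sum(min(x, k) for x in b):
            return False
    return True
```

For instance, ([2, 2], [2, 2]) is realizable (the complete bipartite graph K_{2,2}), while ([3, 1], [2, 2]) is not, since no vertex can exceed degree 2 with only two neighbors available on the other side.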
The procedure is widely used to construct explicit realizations in extremal graph problems, to generate test instances in algorithmics, and to certify feasibility in network design, with impact on studies by Paul Erdős, László Lovász, Béla Bollobás, and Mark Jerrum. It appears in treatments of graphic sequences in textbooks by Ronald Graham, Richard Stanley, and Miklós Bóna, and informs research on graph enumeration pursued by Noga Alon and Joel Spencer as well as on structural graph theory explored by Neil Robertson and Paul Seymour. Practical uses include modeling degree constraints in social network analyses at Columbia University and Stanford University, degree‑sequence problems in bioinformatics at EMBL and the Broad Institute, and in combinatorial optimization courses at ETH Zurich and the University of Cambridge.
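The constructive use noted above, producing an explicit realization rather than a yes/no verdict, is a small extension of the reduction: track vertex labels through the reordering and record each connection as an edge. A sketch with illustrative names:

```python
def realize(seq):
    """Constructive Havel-Hakimi: return an edge list realizing seq
    (vertices labeled 0..n-1), or None if seq is not graphical."""
    nodes = sorted(enumerate(seq), key=lambda t: -t[1])  # (label, degree)
    edges = []
    while nodes and nodes[0][1] > 0:
        v, d = nodes.pop(0)               # vertex with largest residual degree
        if d > len(nodes):
            return None
        for i in range(d):                # join v to the d next-largest
            u, du = nodes[i]
            if du == 0:                   # would go negative: not graphical
                return None
            nodes[i] = (u, du - 1)
            edges.append((v, u))
        nodes.sort(key=lambda t: -t[1])   # stable, preserves label order
    return edges
```

For example, `realize([2, 2, 2])` returns the three edges of a triangle, and `realize([3, 1, 1])` returns `None`.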
Category:Graph theory algorithms