LLMpedia: the first transparent, open encyclopedia generated by LLMs

one-way function

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Oded Goldreich (Hop 5)
Expansion Funnel: Raw 69 → Dedup 0 → NER 0 → Enqueued 0
1. Extracted: 69
2. After dedup: 0 (None)
3. After NER: 0
4. Enqueued: 0
one-way function
Name: One-way function
Field: Cryptography, Theoretical Computer Science
Introduced: 1970s
Notable: Paul Cohen, Michael Rabin, Whitfield Diffie, Martin Hellman, Ronald Rivest

A one-way function is a mathematical mapping that is easy to compute but believed to be hard to invert. It underlies modern public-key and symmetric cryptography, and connects to foundational problems in Turing machine complexity, the P versus NP problem, and average-case hardness studied by researchers at institutions such as Massachusetts Institute of Technology, Princeton University, and Stanford University. Constructions and candidate families have been proposed by figures associated with RSA (cryptosystem), Diffie–Hellman key exchange, and lattice-based proposals from groups at IBM and the National Institute of Standards and Technology.

Definition and Formal Properties

A one-way function is formally defined in the setting of probabilistic polynomial-time algorithms on a model such as a Turing machine or a uniform family of Boolean circuits. The definition requires that for every input sampled from a specified distribution, the forward map can be computed by an efficient deterministic or randomized algorithm while any efficient algorithm for inversion—modeled as a probabilistic polynomial-time Turing machine or nonuniform Boolean circuit family—succeeds with only negligible probability. This property is often expressed using notions from complexity theory including P, NP, and BPP. Security reductions relate inversion success to breaking underlying hardness assumptions associated with problems studied at Institute for Advanced Study groups and departments such as University of California, Berkeley and Carnegie Mellon University.
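The easy-forward, hard-backward asymmetry in the definition can be illustrated with modular exponentiation. The sketch below is a toy, assuming hypothetical demonstration parameters `p` and `g` (far too small for real security): the forward map runs in logarithmically many multiplications, while the only generic inversion shown is exhaustive search.

```python
# Toy illustration (NOT secure): x -> g^x mod p is fast to evaluate,
# while inverting it (the discrete logarithm) has no known
# polynomial-time algorithm. p and g are illustrative values only.
p = 2_147_483_647   # the Mersenne prime 2**31 - 1
g = 7               # a primitive root modulo p

def forward(x: int) -> int:
    # Easy direction: square-and-multiply, O(log x) modular multiplications.
    return pow(g, x, p)

def invert_by_search(y: int, bound: int) -> int:
    # Hard direction: exhaustive search, time linear in the search bound.
    for x in range(bound):
        if pow(g, x, p) == y:
            return x
    raise ValueError("no preimage found below bound")

y = forward(31337)
x_found = invert_by_search(y, 100_000)  # feasible only because x is tiny
```

At realistic sizes (thousands of bits for `p`) the forward direction remains fast while the search space becomes astronomically large, which is exactly the gap the formal definition captures.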

Computational Hardness and Security Assumptions

Cryptographic hardness assumptions used to justify one-wayness include integer factorization assumptions formalized in work related to RSA (cryptosystem), discrete logarithm assumptions exemplified by Diffie–Hellman key exchange and groups used in Elliptic curve cryptography research from Certicom collaborations, and lattice assumptions such as the Shortest Vector Problem and Learning with Errors problem promoted by teams at Microsoft Research and École normale supérieure. Other assumptions draw on coding-theoretic problems like Syndrome decoding studied at Bell Labs, multivariate quadratic equations researched by groups including Darmstadt University of Technology, and subgroup attacks analyzed in cryptanalysis by staff at National Security Agency. Complexity-theoretic frameworks connect these assumptions to average-case complexity results from scholars at Columbia University and hardness amplification techniques developed in work associated with Princeton University.
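The factoring assumption mentioned above rests on the same kind of asymmetry, which a minimal sketch can make concrete. The primes below are toy values chosen only for illustration; multiplication is one operation, while the naive inverse (trial division) scales exponentially in the bit length of the product.

```python
# Sketch of the gap behind factoring-based (RSA-style) assumptions:
# multiplying two primes is immediate; recovering them from the
# product requires search. Toy primes, illustrative only.
p, q = 104_729, 1_299_709   # the 10,000th and 100,000th primes

n = p * q                   # forward direction: a single multiplication

def factor_by_trial_division(n: int) -> tuple[int, int]:
    # Inverse direction: trial division, exponential in the bit length of n.
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("n is prime")

factors = factor_by_trial_division(n)
```

Real deployments use primes of roughly a thousand bits each, against which trial division (and every known classical algorithm) is infeasible.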

Examples and Candidate Constructions

Classical candidate families include functions based on RSA (cryptosystem) modulus exponentiation, modular squaring as in proposals related to Goldwasser–Micali cryptosystem, discrete logarithm maps in multiplicative groups of finite fields and Elliptic curve cryptography curves studied by teams at University of Waterloo and University of Virginia, and lattice-based trapdoor functions derived from Learning with Errors and the NTRU family developed by researchers at Brown University and New York University. Code-based candidates draw on constructions from McEliece cryptosystem research at California Institute of Technology and multivariate candidates arise in work from University of Bordeaux and University of Padua. Hash-based one-way constructions are built from iterated compression functions used in standards committees such as Internet Engineering Task Force and organizations including ISO. Quantum considerations reference algorithms by Peter Shor at AT&T Bell Laboratories and subsequent quantum-resistant proposals advanced at University of Toronto and University of Waterloo groups.
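Among the lattice-based candidates above, the Learning with Errors map admits a very compact sketch. The parameters below are hypothetical toy values (real LWE instances use much larger dimension and modulus); the point is the shape of the forward map: publish a random matrix and a noisy linear combination of a secret.

```python
import random

# Toy LWE-style candidate one-way function (parameters far too small
# for security; all values illustrative). The forward map publishes
# (A, b = A*s + e mod q); inverting it means solving a noisy linear
# system, conjectured hard for suitable parameters.
q, n_dim, m = 97, 8, 16
rng = random.Random(0)   # seeded so the sketch is reproducible

A = [[rng.randrange(q) for _ in range(n_dim)] for _ in range(m)]
s = [rng.randrange(q) for _ in range(n_dim)]     # secret vector
e = [rng.choice((-1, 0, 1)) for _ in range(m)]   # small noise

b = [(sum(A[i][j] * s[j] for j in range(n_dim)) + e[i]) % q
     for i in range(m)]
# Output is (A, b): without the noise e, Gaussian elimination would
# recover s; the noise is what makes inversion (conjecturally) hard.
```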

Applications in Cryptography

One-way functions serve as primitives for many cryptographic tasks: they enable one-way permutations and trapdoor permutations used in RSA (cryptosystem) and trapdoor constructions studied at Princeton University; they underpin pseudorandom generator constructions from work by scholars at MIT, and one-way hashing central to protocols standardized by Internet Engineering Task Force and implemented by companies such as Google and Apple. Commitment schemes, digital signature schemes developed in frameworks at Stanford University and ETH Zurich, and zero-knowledge protocols studied in collaborations involving IBM and Microsoft Research rely on one-wayness. They also form the basis for password hashing schemes used by institutions like OpenBSD and authentication mechanisms in protocols standardized by the Internet Engineering Task Force.
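One of the applications listed above, a commitment scheme, is short enough to sketch directly. This is a minimal hash-based construction, assuming SHA-256 modeled as a one-way, collision-resistant function; the random nonce provides hiding and collision resistance provides binding.

```python
import hashlib
import secrets

# Minimal hash-based commitment sketch: commit now, reveal later.
# Hiding comes from the 32-byte random nonce; binding comes from the
# collision resistance of SHA-256 (modeled here as one-way).
def commit(message: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + message).digest(), nonce

def verify(commitment: bytes, nonce: bytes, message: bytes) -> bool:
    return hashlib.sha256(nonce + message).digest() == commitment

c, r = commit(b"bid: 42")
ok = verify(c, r, b"bid: 42")        # honest opening succeeds
cheat = verify(c, r, b"bid: 43")     # reopening to a different value fails
```

The same commit-then-reveal pattern underlies sealed-bid auctions and is a standard building block inside the zero-knowledge protocols mentioned above.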

Existence Results and Complexity-Theoretic Implications

Proving the existence of one-way functions is closely tied to separating complexity classes; seminal results show that if one-way functions exist then P ≠ NP, with relativized analogues holding relative to oracles studied by researchers at Cornell University and Rutgers University. Conversely, the nonexistence of one-way functions would collapse numerous cryptographic constructions and impact complexity-theoretic hierarchies analyzed in work from University of Chicago. Relativized separations and oracle constructions by scholars such as those affiliated with Harvard University illustrate limits of proof techniques. Average-case complexity frameworks from University of British Columbia and hardness versus randomness tradeoffs developed at University of California, San Diego elucidate consequences for pseudorandomness and derandomization.

Practical Considerations and Implementation Issues

Implementers consider parameter choices based on advances in algorithms from teams at SRI International and CWI, side-channel resistance studied by groups at University of Cambridge and KU Leuven, and hardware acceleration efforts in collaborations with Intel and NVIDIA. Standardization bodies like National Institute of Standards and Technology and Internet Engineering Task Force publish guidance informed by cryptanalysis from institutions including École Polytechnique Fédérale de Lausanne and University College London. Post-quantum migration planning by consortia involving European Commission research programs and industry partners addresses candidate functions resilient to quantum algorithms from IBM Research and Google Quantum AI. Deployment must also manage protocol interactions in standards developed by IETF working groups and ecosystem choices made by companies such as Microsoft and Amazon Web Services.

Category:Cryptography