LLMpedia: The first transparent, open encyclopedia generated by LLMs

Sergey Levine

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: NIPS Hop 4
Expansion Funnel: Raw 40 → Dedup 0 → NER 0 → Enqueued 0
Sergey Levine
Sequoia Capital · CC BY 4.0
Name: Sergey Levine
Fields: Machine learning, robotics, artificial intelligence
Workplaces: University of California, Berkeley; Google
Alma mater: Stanford University
Known for: Deep reinforcement learning, robotic control, imitation learning

Sergey Levine is an American researcher known for work in machine learning, robotics, and artificial intelligence, with influential contributions to deep reinforcement learning, policy optimization, and robotic manipulation. He is a professor at the University of California, Berkeley, where he leads a research group and has collaborated with researchers across academic and industrial laboratories. Levine's work bridges theory and application, shaping autonomous systems, learning-based control, and robot learning benchmarks.

Early life and education

Levine completed his undergraduate and doctoral studies at Stanford University, where he was affiliated with the Stanford Artificial Intelligence Laboratory, and carried out postdoctoral research at the University of California, Berkeley with the Berkeley Artificial Intelligence Research (BAIR) Lab. During his doctoral and postdoctoral training he worked alongside scholars connected to the MIT Computer Science and Artificial Intelligence Laboratory, Carnegie Mellon University, and research groups at Google DeepMind. His early exposure included interactions with faculty from the University of Toronto and visiting researchers associated with the Toyota Research Institute and OpenAI.

Research and career

Levine joined the faculty of the University of California, Berkeley and established a laboratory that interfaces with departments and centers such as the Electrical Engineering and Computer Sciences Department and the Berkeley AI Research (BAIR) Lab, with collaborations involving the Robotics Institute and industrial partners such as Google and NVIDIA. He has led projects funded by agencies and organizations including the National Science Foundation, DARPA, and corporate research programs at Amazon, with joint work involving teams at DeepMind and OpenAI. His lab has collaborated with researchers from MIT, Stanford University, Carnegie Mellon University, and international institutions such as ETH Zurich and the University of Oxford.

Contributions to reinforcement learning and robotics

Levine developed algorithms in deep reinforcement learning that integrate ideas from supervised learning, optimal control, and probabilistic inference, influencing work at groups such as DeepMind and laboratories at Google Research. His contributions include methods for end-to-end training of visuomotor policies, hybrid model-based and model-free approaches, and scalable policy gradient and actor-critic techniques used by teams at Facebook AI Research and other industry labs. He published work on guided policy search, stochastic value gradients, and scalable off-policy learning that informed benchmarks and datasets used by researchers at the Berkeley AI Research (BAIR) Lab, OpenAI, and the Robotics Institute. Levine's research advanced robotic manipulation, enabling dexterous behaviors evaluated on testbeds associated with the Toyota Research Institute, the NASA Jet Propulsion Laboratory, and academic platforms popularized by PR2 and UR5 robots. His approaches influenced subsequent studies on imitation learning and inverse reinforcement learning pursued at the University of Toronto, ETH Zurich, and Carnegie Mellon University.
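The policy-gradient methods mentioned above share a common core: increase the log-probability of actions in proportion to the reward they earn. A minimal REINFORCE-style sketch on a toy two-armed bandit illustrates this (an illustrative example only, not code from Levine's papers; the reward values, learning rate, and variable names are assumptions for demonstration):

```python
import numpy as np

# Toy two-armed bandit: action 1 has higher expected reward (assumed values).
REWARDS = np.array([0.2, 1.0])
rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_step(theta, lr=0.1):
    """One REINFORCE update: theta += lr * reward * grad log pi(a | theta)."""
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = REWARDS[a] + 0.1 * rng.standard_normal()  # noisy observed reward
    grad_log = -probs                              # grad of log-softmax ...
    grad_log[a] += 1.0                             # ... is onehot(a) - probs
    return theta + lr * r * grad_log

theta = np.zeros(2)          # policy logits
for _ in range(2000):
    theta = reinforce_step(theta)

probs = softmax(theta)
print(probs)                 # the policy should strongly prefer action 1
```

Full actor-critic methods extend this sketch by replacing the raw reward with a learned value-function baseline, which reduces the variance of the gradient estimate.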

Awards and honors

Levine has received recognition from professional societies and institutions, including distinctions from the Association for Computing Machinery, awards linked to the National Science Foundation, and fellowships associated with collaborations involving Stanford University and Berkeley. His papers have been presented at premier conferences such as NeurIPS, ICML, ICRA, and RSS, and have been cited in award-winning projects and invited talks at venues including AAAI and CVPR. He has served on program committees and editorial boards connected to the Journal of Machine Learning Research and major conferences attended by members of DeepMind and OpenAI.

Selected publications

- Levine, S.; et al., "End-to-End Training of Deep Visuomotor Policies", JMLR, cited by research at the Berkeley AI Research (BAIR) Lab and Google DeepMind.
- Levine, S.; et al., "Guided Policy Search", ICML, influential for teams at Stanford University and Carnegie Mellon University.
- Levine, S.; et al., "Stochastic Value Gradients", ICML, referenced in studies from the University of Toronto and ETH Zurich.
- Levine, S.; et al., "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection", RSS, connected to projects at the Toyota Research Institute and NVIDIA.
- Levine, S.; et al., "Continuous Control with Deep Reinforcement Learning" (collaborative works and follow-ups), AAAI- and ICLR-related venues.

Category:American computer scientists Category:Roboticists Category:Machine learning researchers