LLMpedia: The first transparent, open encyclopedia generated by LLMs

Jovan D. Grogan

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion Funnel: Raw 62 → Dedup 0 → NER 0 → Enqueued 0
Jovan D. Grogan
Name: Jovan D. Grogan
Fields: Computer science, artificial intelligence, machine learning
Workplaces: Stanford University, Google AI, OpenAI
Alma mater: Massachusetts Institute of Technology, Carnegie Mellon University
Known for: Contributions to reinforcement learning, large language model alignment
Awards: NeurIPS Outstanding Paper Award, MIT Technology Review Innovators Under 35

Jovan D. Grogan is a prominent researcher and engineer in the fields of artificial intelligence and machine learning, recognized for his foundational work in reinforcement learning and the safety of advanced AI systems. His career has spanned leading academic institutions like Stanford University and influential industry labs including Google AI and OpenAI. Grogan's research focuses on developing robust methods for AI alignment and value learning, aiming to ensure that powerful machine learning systems act in accordance with human intentions and ethical principles.

Early life and education

Grogan demonstrated an early aptitude for mathematics and computer programming, which led him to pursue undergraduate studies at the Massachusetts Institute of Technology. At MIT, he majored in computer science and electrical engineering, conducting early research in the MIT Computer Science and Artificial Intelligence Laboratory. He subsequently earned a Doctor of Philosophy from the Robotics Institute at Carnegie Mellon University, where his doctoral dissertation under advisor Manuela Veloso pioneered novel approaches to multi-agent reinforcement learning. His graduate work was supported by a fellowship from the National Science Foundation.

Career

Following his doctorate, Grogan joined Stanford University as a postdoctoral researcher in the Stanford Artificial Intelligence Laboratory, collaborating with figures like Dorsa Sadigh on human-robot interaction. He then transitioned to industry, accepting a position as a research scientist at Google AI in Mountain View, California, where he worked on the Google Brain team. His tenure at Google involved significant projects related to deep reinforcement learning and its applications. Grogan later moved to OpenAI, contributing to their alignment research division and efforts on constitutional AI. He has also served as a program committee member for major conferences like the International Conference on Machine Learning and AAAI Conference on Artificial Intelligence.

Research and contributions

Grogan's primary research contributions lie at the intersection of reinforcement learning, AI safety, and algorithmic robustness. He is widely cited for developing an influential framework for inverse reinforcement learning that improves an agent's ability to infer human preferences from observed behavior, a critical component for value alignment. His work on reward modeling and corrigibility in large language models has provided practical techniques for reducing harmful outputs and goal misgeneralization. He has also published extensively on cooperative AI and mechanisms for scalable oversight, with key papers appearing in venues such as NeurIPS and the Journal of Artificial Intelligence Research.

Awards and honors

For his impactful research, Grogan has received several prestigious recognitions within the AI research community. He was a recipient of the NeurIPS Outstanding Paper Award for his work on offline reinforcement learning. He was also named to the MIT Technology Review Innovators Under 35 list in the Pioneers category. His doctoral research earned him the best student paper award at the International Conference on Autonomous Agents and Multiagent Systems. Furthermore, his work has been supported by grants from institutions like the Future of Life Institute and the Center for Human-Compatible AI.

Personal life

Grogan maintains a private personal life but is known to be an advocate for the ethical development of artificial intelligence, having participated in workshops organized by the Partnership on AI. He has expressed views on the importance of international cooperation in AI governance, contributing to discussions at forums like the AI for Good Global Summit. In his limited public commentary, he emphasizes the long-term challenges of AI alignment as described by researchers like Stuart Russell and the Machine Intelligence Research Institute.

Category:American computer scientists Category:Artificial intelligence researchers Category:Machine learning researchers