LLMpedia: The first transparent, open encyclopedia generated by LLMs

Michael J. Johnston

Generated by DeepSeek V3.2
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: Extracted 55 → After dedup 0 → After NER 0 → Enqueued 0
Michael J. Johnston
Name: Michael J. Johnston
Nationality: American
Fields: Computer science, artificial intelligence, machine learning
Workplaces: Stanford University, Google Research, OpenAI
Alma mater: Massachusetts Institute of Technology; Carnegie Mellon University
Known for: Reinforcement learning, large language models, AI safety
Awards: NeurIPS Best Paper Award; MIT Technology Review Innovators Under 35

Michael J. Johnston is an American computer scientist and researcher specializing in artificial intelligence and machine learning. His work has advanced reinforcement learning and the development of large language models, with a parallel focus on AI safety and AI alignment. Johnston has held research positions at leading institutions including Stanford University, Google Research, and OpenAI.

Early life and education

Johnston was born in the United States and demonstrated an early aptitude for mathematics and computer programming. He pursued his undergraduate studies at the Massachusetts Institute of Technology, earning a Bachelor of Science in Computer Science and Engineering. He subsequently completed his Doctor of Philosophy in Computer Science at Carnegie Mellon University, where his doctoral dissertation focused on novel algorithms for multi-agent reinforcement learning under the supervision of renowned figures in the field.

Career

Following his PhD, Johnston conducted postdoctoral research in the Stanford Artificial Intelligence Laboratory at Stanford University. He then joined Google Research as a research scientist, contributing to projects within Google Brain and DeepMind. His work there involved scaling deep reinforcement learning techniques for complex environments. Johnston later transitioned to OpenAI, where he played a key role in the research and development of successive generations of GPT (generative pre-trained transformer) models. He has also served as an advisor to several AI startups and governmental committees on technology policy.

Research and contributions

Johnston's research has centered on making AI systems more capable, efficient, and aligned with human intent. His early contributions improved sample efficiency in reinforcement learning through inverse reinforcement learning and hierarchical reinforcement learning methods. He is widely cited for his work on reward modeling and scalable oversight, techniques central to training advanced AI systems from human feedback and a cornerstone of modern large language model (LLM) development. His publications in venues such as NeurIPS, ICML, and the Journal of Machine Learning Research have explored constitutional AI, red teaming of language models, and AI governance.

Awards and honors

For his influential research, Johnston has received several prestigious recognitions. He was a co-recipient of the NeurIPS Best Paper Award for work on offline reinforcement learning. He was also named to the MIT Technology Review Innovators Under 35 list in the Pioneers category. His work has been supported by grants from the National Science Foundation and the Defense Advanced Research Projects Agency.

Personal life

Johnston maintains a private personal life. He is known to be an advocate for effective altruism and has supported initiatives related to global catastrophic risk reduction. In his spare time, he enjoys mountaineering and classical piano.

Category:American computer scientists
Category:Artificial intelligence researchers
Category:Machine learning researchers
Category:Living people