| Michael J. Johnston | |
|---|---|
| Name | Michael J. Johnston |
| Nationality | American |
| Fields | Computer science, artificial intelligence, machine learning |
| Workplaces | Stanford University, Google Research, OpenAI |
| Alma mater | Massachusetts Institute of Technology, Carnegie Mellon University |
| Known for | Reinforcement learning, large language models, AI safety |
| Awards | NeurIPS Best Paper Award, MIT Technology Review Innovators Under 35 |
Michael J. Johnston is an American computer scientist and researcher specializing in artificial intelligence and machine learning. His work has significantly advanced reinforcement learning and the development of large language models, alongside a sustained focus on AI safety and alignment. Johnston has held prominent research positions at leading institutions including Stanford University, Google Research, and OpenAI.
Johnston was born in the United States and demonstrated an early aptitude for mathematics and computer programming. He pursued his undergraduate studies at the Massachusetts Institute of Technology, earning a Bachelor of Science in Computer Science and Engineering. He subsequently completed his Doctor of Philosophy in Computer Science at Carnegie Mellon University, where his doctoral dissertation focused on novel algorithms for multi-agent reinforcement learning under the supervision of renowned figures in the field.
Following his PhD, Johnston conducted postdoctoral research at the Stanford Artificial Intelligence Laboratory at Stanford University. He then joined Google Research as a research scientist, contributing to projects within Google Brain and DeepMind, where his work involved scaling deep reinforcement learning techniques to complex environments. Johnston later moved to OpenAI, where he played a key role in the research and development of successive generations of GPT (generative pre-trained transformer) models. He has also served as an advisor to several AI startups and to governmental committees on technology policy.
Johnston's research has centered on making AI systems more capable, efficient, and aligned with human intent. His early contributions include improving sample efficiency in reinforcement learning through inverse reinforcement learning and hierarchical reinforcement learning methods. He is widely cited for his work on reward modeling and scalable oversight techniques, which are critical for training advanced AI using human feedback, a cornerstone of modern LLM (large language model) development. His publications in venues like NeurIPS, ICML, and the Journal of Machine Learning Research have explored constitutional AI, red teaming of language models, and AI governance.
For his influential research, Johnston has received several prestigious recognitions. He was a co-recipient of the NeurIPS Best Paper Award for work on offline reinforcement learning. He was also named to the MIT Technology Review Innovators Under 35 list in the Pioneers category. His work has been supported by grants from the National Science Foundation and the Defense Advanced Research Projects Agency.
Johnston maintains a private personal life. He is known to be an advocate for effective altruism and has supported initiatives related to global catastrophic risk reduction. In his spare time, he enjoys mountaineering and classical piano.
Category:American computer scientists
Category:Artificial intelligence researchers
Category:Machine learning researchers
Category:Living people