| Google Research New York | |
|---|---|
| Name | Google Research New York |
| Formation | 2010s |
| Headquarters | New York City |
| Parent organization | Google |
| Type | Research group |
Google Research New York is a research center of the technology company Google, located in New York City. The center conducts research in machine learning, natural language processing, computer vision, human-computer interaction, and systems, collaborating with universities, industry partners, and non-profit organizations. It has contributed to foundational models, open-source tools, and applied research that informs Google products and the wider academic community.
The New York lab traces its origins to Google's expansion of its research operations alongside groups such as Google Research, DeepMind, the Google Brain team, and Google AI, and to the company's engagement with regional hubs in Silicon Valley, Cambridge (UK), Zurich, Bangalore, and Montreal. Early staff included researchers with prior affiliations at Columbia University, New York University, Princeton University, Cornell University, Harvard University, the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and Carnegie Mellon University. Leadership connections tied the lab to figures who previously worked at Bell Labs, AT&T Labs, IBM Research, Microsoft Research, Facebook AI Research, and OpenAI. The lab's growth paralleled industry milestones such as ImageNet, AlexNet, the Transformer architecture, BERT, GPT-2, and ResNet, and initiatives such as TensorFlow, PyTorch, Keras, and XLA.
Research lines have spanned natural language processing, computer vision, speech recognition, recommendation systems, information retrieval, privacy-preserving machine learning, and model robustness. Projects referenced by collaborators and publications intersect with work at Facebook AI Research, Microsoft Research Cambridge, DeepMind London, OpenAI, the Allen Institute for AI, IBM Research, Amazon Research, and NVIDIA Research. Efforts on model architectures drew on the Transformer architecture, attention mechanisms, and sequence-to-sequence learning, and on benchmarks such as GLUE, SQuAD, the ImageNet Large Scale Visual Recognition Challenge, and COCO. Applied projects targeted integration with product teams such as Google Search, YouTube, Google Translate, Gmail, Google Photos, and Google Assistant. Interdisciplinary work involved partnerships with institutions such as NewYork–Presbyterian Hospital, the Mount Sinai Health System, Brooklyn Technical High School, and museums such as the Metropolitan Museum of Art.
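For readers unfamiliar with the attention and sequence-to-sequence concepts named above, the core operation of the Transformer architecture is scaled dot-product attention. The sketch below is a minimal, generic textbook formulation, not code from the lab; the array shapes and variable names are chosen purely for illustration.

```python
# Minimal illustrative sketch of scaled dot-product attention, the core
# operation of the Transformer architecture. Generic formulation; not
# code from Google Research New York.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: arrays of shape (seq_len, d_k); V: shape (seq_len, d_v)."""
    d_k = Q.shape[-1]
    # Similarity scores between queries and keys, scaled so softmax
    # gradients stay stable as d_k grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted average of the value vectors.
    return weights @ V

# Toy usage: self-attention over 4 tokens with 8-dimensional features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```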
The lab's physical and computing facilities were based in New York City, with spaces in Manhattan and Brooklyn near the campuses of Columbia University and New York University. Hardware stacks included NVIDIA GPUs and Google TPUs, along with specialized systems comparable to those used at Argonne National Laboratory, Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory, and university clusters at Princeton University. Laboratory design mirrored spaces at Microsoft Research Redmond, Facebook's Building 20, and academic labs such as the MIT Computer Science and Artificial Intelligence Laboratory and the Stanford Artificial Intelligence Laboratory. Data center practices drew on standards developed by bodies such as the Uptime Institute and on collaborations resembling those with United States Department of Energy research centers.
The lab engaged in collaborations with universities including Columbia University, New York University, Princeton University, Cornell Tech, Yale University, Rutgers University, the CUNY Graduate Center, Syracuse University, Rensselaer Polytechnic Institute, and Brown University. Industry partnerships included projects with Apple, Microsoft, Amazon, IBM, Meta Platforms, NVIDIA, Intel, Qualcomm, and startups from Silicon Alley. Non-profit and standards collaborations involved groups such as OpenAI, the Allen Institute for AI, the Partnership on AI, the Electronic Frontier Foundation, Creative Commons, and arts institutions such as the Whitney Museum of American Art. Funding and joint programs connected the lab with agencies and foundations including the National Science Foundation, the National Institutes of Health, the Simons Foundation, the John S. and James L. Knight Foundation, and the Alfred P. Sloan Foundation.
The center supported student programs and exchanges with departments such as the Columbia University Department of Computer Science, the NYU Courant Institute of Mathematical Sciences, the Princeton Department of Computer Science, the Cornell Tech Jacobs Institute, and the Harvard John A. Paulson School of Engineering and Applied Sciences. Outreach included internships, fellowships, and workshops similar to initiatives such as Google Summer of Code, the Machine Learning Summer School, NeurIPS and ICML workshops, and ACL summer schools, as well as public lectures at venues such as the New York Public Library and The New School. Community partnerships included collaborations with civic organizations such as the New York City Mayor's Office and the New York State Department of Education, and with cultural partners such as Lincoln Center.
Researchers contributed papers and software to conferences and journals such as NeurIPS, ICLR, ICML, ACL, CVPR, ECCV, NAACL, EMNLP, SIGIR, KDD, and CHI, and to outlets such as Nature, Science, Communications of the ACM, and the Journal of Machine Learning Research. Contributions built on methods related to BERT, the Transformer architecture, attention mechanisms, Word2Vec, GloVe, beam search, variational autoencoders, and generative adversarial networks, and on datasets analogous to ImageNet, COCO, SQuAD, and Common Crawl. The lab's impact is visible in product improvements and open-source releases similar to TensorFlow, Keras, TensorFlow Hub, and Model Garden, and in collaborations at venues such as the Open Data Institute and Association for Computing Machinery events.
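Of the methods named above, beam search is the simplest to show concretely: it keeps only the highest-scoring partial sequences at each decoding step. The sketch below is a generic formulation under assumed names; the toy probability table, the `step_log_probs` interface, and the `beam_search` function are invented for illustration and do not come from the lab's software.

```python
# Minimal illustrative sketch of beam search. Generic textbook version;
# the scoring function and token table are hypothetical examples.
import math

def beam_search(step_log_probs, beam_width=2, length=3):
    """step_log_probs(prefix) -> dict of next token -> log-probability."""
    beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for tok, lp in step_log_probs(seq).items():
                candidates.append((seq + (tok,), score + lp))
        # Keep only the top-scoring partial sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

# Toy model: token probabilities independent of the prefix.
table = {"a": 0.5, "b": 0.3, "c": 0.2}
model = lambda prefix: {t: math.log(p) for t, p in table.items()}
for seq, score in beam_search(model):
    print(seq, round(score, 3))
```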
Category:Google
Category:Research institutes in New York City