LLMpedia: The first transparent, open encyclopedia generated by LLMs

Laurens van der Maaten

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion Funnel Raw 71 → Dedup 0 → NER 0 → Enqueued 0
Laurens van der Maaten
Name: Laurens van der Maaten
Fields: Machine learning, Artificial intelligence, Data visualization
Workplaces: Delft University of Technology; University of California, San Diego; Tilburg University
Alma mater: Delft University of Technology; University of Amsterdam
Known for: t-SNE, dimensionality reduction, representation learning

Laurens van der Maaten is a researcher in machine learning and data visualization known for developing algorithms for dimensionality reduction and representation learning. His work has influenced fields ranging from computer vision and natural language processing to bioinformatics, and has been applied in industry and academia. He has held academic posts in Europe and North America and collaborated with researchers at leading institutions and technology companies.

Early life and education

Born and raised in the Netherlands, he studied engineering and computer science at Delft University of Technology and completed graduate work at the University of Amsterdam under advisors with links to research groups at CWI and collaborations involving Max Planck Society researchers. During his doctoral studies he focused on probabilistic models and optimization methods, engaging with scholars connected to INRIA, University of Oxford, ETH Zurich, and Carnegie Mellon University. Early influences included work from researchers at Google Research, Microsoft Research, Facebook AI Research, and groups around Stanford University, while he participated in workshops hosted by NeurIPS, ICML, and CVPR.

Research and career

His career includes faculty and research positions at TU Delft, Tilburg University, and the University of California, San Diego, as well as visiting collaborations with teams at Google DeepMind, OpenAI, Amazon Web Services, and corporate labs tied to IBM Research and Apple Inc. He is best known for contributions to dimensionality reduction, notably algorithms that map high-dimensional data to low-dimensional representations, building on earlier methods from Geoffrey Hinton, Lawrence Saul, Sam Roweis, and work presented at COLT and UAI. His research spans applications in computer vision benchmarks such as ImageNet and CIFAR-10, and in natural language settings involving datasets from ACL venues and corpora used at EMNLP and NAACL.
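The neighbor-embedding lineage mentioned above starts from Gaussian conditional affinities over pairwise distances. The following NumPy sketch is illustrative only: it uses a single fixed bandwidth `sigma`, whereas real implementations calibrate a per-point bandwidth against a perplexity target.

```python
import numpy as np

def conditional_affinities(X, sigma=1.0):
    """Gaussian conditional probabilities p_{j|i} over pairwise squared
    distances, in the style of SNE-family methods (illustrative sketch)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)  # a point is not its own neighbor
    # softmax over each row, shifted for numerical stability
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 toy points in 3 dimensions
P = conditional_affinities(X)  # each row sums to 1, zero diagonal
```

Embedding methods in this family then seek low-dimensional points whose induced affinities match these, typically by minimizing a Kullback-Leibler divergence.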

He has organized tutorials and workshops at major conferences including NeurIPS, ICML, ECCV, and ICCV, and served on program committees for KDD and SIGIR. His lab has collaborated with researchers at Broad Institute and hospitals participating in consortia like Human Cell Atlas to apply visualization techniques to single-cell sequencing and biomedical imaging datasets.

Publications and notable contributions

He is widely cited for an algorithm that improves visualization quality of embeddings and preserves local and global structure, building conceptually on stochastic neighbor embedding approaches introduced by researchers at University of Toronto and developed in contexts discussed at NIPS meetings. His publications appear in proceedings of NeurIPS, ICML, CVPR, ICLR, and journals associated with IEEE and ACM. He has released open-source implementations used in libraries alongside projects from scikit-learn, TensorFlow, PyTorch, and packages maintained by developers affiliated with Anaconda, Inc. and GitHub.
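As one illustration of the open-source implementations mentioned above, scikit-learn ships a t-SNE estimator in `sklearn.manifold`. A minimal usage sketch follows; the data is synthetic and the parameter values are illustrative, not recommendations.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 20))  # 100 synthetic points in 20 dimensions

# perplexity must be smaller than the number of samples; 30 is the default
tsne = TSNE(n_components=2, perplexity=30.0, init="pca", random_state=42)
embedding = tsne.fit_transform(X)  # shape (100, 2)
```

The resulting two-dimensional coordinates are typically passed to a scatter plot for visual inspection of cluster structure.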

Other contributions include work on metric learning, approximate nearest neighbors linked to methods from Spotify and Pinterest engineers, and methods for accelerating large-scale embedding visualization comparable to techniques from Facebook AI Research and Google AI. He has collaborated on interdisciplinary studies with groups at MIT, Harvard University, Yale University, and Utrecht University applying visualization tools to neuroscience datasets and genomics projects presented at RECOMB and ISMB meetings.
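The approximate-nearest-neighbor work referenced here is not reproduced below; as a generic illustration of one common idea behind such systems, this NumPy sketch hashes vectors with random hyperplanes so that cosine-similar vectors tend to share sign bits. All names and parameters are illustrative.

```python
import numpy as np

def hyperplane_signatures(X, n_bits=32, seed=0):
    """Random-hyperplane hashing: each bit records which side of a random
    hyperplane a vector falls on (locality-sensitive for cosine similarity)."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def approx_neighbors(query_sig, sigs, k=5):
    """Rank all candidates by Hamming distance between bit signatures."""
    hamming = (sigs != query_sig).sum(axis=1)
    return np.argsort(hamming)[:k]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))
sigs = hyperplane_signatures(X)
nn = approx_neighbors(sigs[0], sigs, k=5)  # candidates near point 0
```

Production systems refine this idea with multiple hash tables, trees, or graphs to trade accuracy against query time.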

Awards and honors

His work has been recognized through high citation counts, best-paper nominations at venues such as NeurIPS and ICML, and invited talks at institutes including Caltech, Princeton University, Imperial College London, and ETH Zurich. He has received research funding from national agencies comparable to NWO and participated in EU programs similar to Horizon 2020 collaborations. He has also been listed among contributors to influential open-source projects acknowledged at industry summits hosted by OCaml Labs and at technology conferences organized by SIGGRAPH panels.

Personal life and advocacy

Outside academia, he has been involved in mentoring initiatives connected to student chapters at IEEE and ACM, and has spoken about reproducible research at platforms associated with arXiv and community efforts around OpenAI policy discussions. He participates in outreach with organizations affiliated with Data for Good initiatives and has advised startups incubated through accelerators linked to Y Combinator and university technology transfer offices.

Category:Machine learning researchers Category:Dutch computer scientists