| GMapping | |
|---|---|
| Name | GMapping |
| Author | Giorgio Grisetti, Cyrill Stachniss, and Wolfram Burgard |
| Released | 2004 |
| Programming language | C++ |
| License | LGPL |
| Platform | Linux, ROS |
GMapping is a widely used simultaneous localization and mapping (SLAM) algorithm for mobile robotics developed in the early 2000s. It builds occupancy-grid maps for laser-equipped platforms by combining a particle filter with scan matching. The implementation became a staple in research and robotics systems, integrated into robotics middleware and tested on diverse robotic platforms.
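GMapping's map representation follows the standard occupancy-grid formulation, in which each cell accumulates evidence in log-odds form as laser beams either hit it or pass through it. The following is an illustrative sketch of that general technique, not GMapping's actual API; the increment values are arbitrary example parameters.

```cpp
#include <cmath>

// One occupancy-grid cell, storing evidence in log-odds form.
struct Cell {
    double log_odds = 0.0;  // 0.0 <=> p(occupied) = 0.5 (unknown)
};

// Inverse-sensor-model increments (example values, not GMapping's):
// a laser hit raises the cell's log-odds, a pass-through lowers it.
constexpr double kLogOddsHit  =  0.85;
constexpr double kLogOddsMiss = -0.40;

// Accumulate one observation into a cell.
void update_cell(Cell& c, bool hit) {
    c.log_odds += hit ? kLogOddsHit : kLogOddsMiss;
}

// Recover the occupancy probability from the log-odds value.
double probability(const Cell& c) {
    return 1.0 - 1.0 / (1.0 + std::exp(c.log_odds));
}
```

Storing log-odds rather than probabilities makes each update a single addition and avoids repeated renormalization; the probability is only computed when the map is read out.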
GMapping was introduced by researchers associated with institutions such as the University of Freiburg, the University of Bonn, and the Technical University of Munich, and presented in venues like the IEEE International Conference on Robotics and Automation. The technique builds on probabilistic robotics foundations laid out by authors from institutions including Stanford, MIT, Carnegie Mellon University, and the University of Oxford. Its software implementation has been distributed alongside middleware projects such as ROS and components from Willow Garage, enabling use in mobile robots from manufacturers like Clearpath Robotics, KUKA, and iRobot.
The core algorithm employs a Rao-Blackwellized particle filter, an approach with connections to work by researchers at Carnegie Mellon University, Stanford University, and the University of Bonn. Each particle maintains a trajectory hypothesis and an associated occupancy grid map, using grid representations akin to methods developed at ETH Zurich and the Massachusetts Institute of Technology. Laser scan matching leverages techniques comparable to those validated at institutions such as the University of Oxford and the University of Cambridge. The C++ implementation interfaces with ROS stacks developed by Willow Garage and OSRF, and relies on libraries and standards influenced by projects from Google, Microsoft Research, and Amazon Robotics for serialization and data handling.
GMapping has been applied in indoor service robotics on platforms from companies like iRobot and Blue River Technology, in research deployments at laboratories such as MIT CSAIL and EPFL, and in industrial automation by integrators including KUKA and ABB. Autonomous vehicle labs at universities such as UC Berkeley, ETH Zurich, and TU Darmstadt have employed it for initial mapping experiments. Field deployments have been demonstrated in projects coordinated by NASA, DARPA, and the European Space Agency for exploratory robotics, and in commercial robotics ecosystems including Amazon Robotics and Fetch Robotics for warehouse mapping.
Benchmarking studies by groups at the University of Oxford, MIT, and INRIA have compared the algorithm's performance against alternatives produced by teams at Google DeepMind, Microsoft Research, and the University of Michigan. GMapping performs robustly for 2D LiDAR-equipped robots in structured environments such as those encountered in research labs at Carnegie Mellon University, industrial sites managed by Siemens, and public spaces surveyed by projects at ETH Zurich. Limitations surface in large-scale outdoor environments explored by teams at NASA's Jet Propulsion Laboratory, where GPS-denied SLAM challenges favor solutions from institutions like Stanford and UCLA. The algorithm can struggle with the loop closure complexity addressed by methods from the University of Zaragoza and the University of Freiburg, and with multi-sensor fusion advances originating from ETH Zurich and the University of Toronto.
Extensions and forks have been developed by contributors from universities and companies such as the University of Bonn, Willow Garage, Clearpath Robotics, and Open Robotics. Variants incorporate ideas from graph-based SLAM research by researchers at ETH Zurich and INRIA, and integrate visual odometry techniques advanced at Oxford and Carnegie Mellon University. Hybrid systems combine GMapping-derived occupancy grids with semantic mapping work from MIT, object-recognition modules from Google Brain, and global pose optimization from researchers at EPFL and KU Leuven. Community contributions have added compatibility layers spanning ROS 1 and ROS 2, maintained by Open Robotics and the ROS-Industrial consortium.
To deploy the implementation, developers typically build the C++ code on Linux distributions favored by robotics labs, such as the Ubuntu LTS releases used at Stanford, MIT, and the Technical University of Munich, and integrate it with ROS packages maintained by Willow Garage and Open Robotics. Common workflows involve LiDAR sensor stacks from SICK, Hokuyo, and Velodyne, and hardware platforms produced by Clearpath Robotics, PAL Robotics, and Fetch Robotics. Integration with mapping and navigation stacks uses concepts established by researchers at Carnegie Mellon University, ETH Zurich, and the University of Oxford, enabling path planning modules influenced by work from Google, Microsoft Research, and the NavLab projects.