| Photogrammetry | |
|---|---|
| Name | Photogrammetry |
| Classification | Remote sensing, Computer vision, Geodesy |
| Inventor | Aimé Laussedat |
| Related | Stereoscopy, Lidar, Structure from motion |
Photogrammetry is the science of making measurements from photographs, particularly for recovering the precise positions of surface points. It is a core technique within the fields of remote sensing, computer vision, and geodesy, enabling the creation of maps, 3D models, and detailed spatial data. The discipline intersects with technologies like lidar and methods such as stereoscopy to extract quantitative information from imagery captured by devices ranging from handheld cameras to satellite platforms.
The fundamental output of photogrammetric processes is typically accurate geometric data, which can be rendered as orthophotos, digital elevation models, or detailed 3D reconstructions of objects and terrains. This data is vital for applications in topography, archaeology, architecture, and film production. The field is historically rooted in analog photography but has been revolutionized by digital imaging and advanced computational algorithms, allowing for automated processing of vast image datasets. Key professional bodies that advance the discipline include the American Society for Photogrammetry and Remote Sensing and the International Society for Photogrammetry and Remote Sensing.
Core principles rely on the mathematical relationships defined by collinearity condition equations and bundle adjustment, which solve for camera positions and object point coordinates simultaneously. Aerial photogrammetry often utilizes vertical aerial photographs with high overlap, processed through stereo plotting to generate contour lines. Close-range photogrammetry employs convergent imagery of smaller subjects. Modern computational photogrammetry frequently uses structure-from-motion algorithms, which automatically identify tie points across multiple images, solve for camera calibration and poses, and generate a sparse point cloud. This is often followed by multi-view stereo algorithms to produce dense surface models.
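The collinearity condition underlying these methods states that an object point, the camera's perspective centre, and the corresponding image point all lie on a single straight line. A minimal numerical sketch of that condition, assuming a simple pinhole camera with focal length f, rotation matrix R, and perspective centre X0 (the function name and parameters are illustrative, not from any particular library):

```python
import numpy as np

def collinearity_project(X, X0, R, f):
    """Project an object point X into image coordinates using the
    collinearity condition: X, the perspective centre X0, and the
    image point lie on one line through the camera."""
    d = R @ (X - X0)        # object point in camera coordinates
    x = -f * d[0] / d[2]    # collinearity equation for image x
    y = -f * d[1] / d[2]    # collinearity equation for image y
    return np.array([x, y])

# Example: a level (nadir-looking) camera 100 m above the ground
R = np.eye(3)                        # identity rotation: no tilt
X0 = np.array([0.0, 0.0, 100.0])     # perspective centre, metres
ground = np.array([10.0, 5.0, 0.0])  # ground point, metres
xy = collinearity_project(ground, X0, R, f=0.15)  # 150 mm focal length
```

In a bundle adjustment, these equations are written for every tie-point observation in every image, and the camera parameters and object coordinates are refined together by non-linear least squares.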
In topographic mapping, agencies like the United States Geological Survey and Ordnance Survey use it to produce national map series. The construction and mining industries apply it for volumetric analysis and monitoring earthworks, as seen in projects like the Channel Tunnel. Within cultural heritage, institutions such as the Getty Conservation Institute have documented sites like Petra and Machu Picchu. The entertainment industry leverages it for creating digital assets in films such as The Matrix and video games like The Last of Us. It is also critical for crash scene investigation by the National Transportation Safety Board and in precision agriculture for crop health assessment.
The origins trace back to the mid-19th century with the work of Frenchman Aimé Laussedat, often called the father of photogrammetry, who experimented with kite and balloon photography and pioneered systematic terrestrial (ground-based) surveying with cameras. Significant advancement came with the development of the stereoplotter by Carl Pulfrich at Carl Zeiss AG, enabling efficient map creation from aerial photography, whose interpretation was pioneered during World War I by figures such as O. G. S. Crawford. The post-World War II era saw the rise of analytical plotters, leading to the fully digital systems of the late 20th century. The launch of the CORONA spy satellite program and later commercial satellites like IKONOS expanded its reach into space-based imaging.
Commercial software suites are led by products from Hexagon AB, including ERDAS IMAGINE and Leica Photogrammetry Suite, and Bentley Systems' ContextCapture. Open-source options include OpenDroneMap and the Multi-View Environment library. Pix4D and Agisoft Metashape are prominent in the UAV and close-range markets. Processing often leverages GPU acceleration from companies like NVIDIA. Specialized hardware includes metric cameras from Leica Geosystems and Vexcel Imaging, as well as unmanned aerial vehicle platforms from DJI and SenseFly.
Accuracy can be compromised by poor image resolution, inadequate camera calibration, or insufficient image network geometry, such as a lack of convergent photography. Processing is computationally intensive, requiring significant RAM and storage, especially for projects like scanning the Palmyra arch. Environmental factors like cloud cover, shadows, and homogeneous surfaces (e.g., water, sand) can prevent successful feature matching. Legal and ethical challenges arise concerning privacy when surveying urban areas and the use of drones, regulated by bodies like the Federal Aviation Administration. It also faces competition from active sensors like lidar, which perform better in vegetated areas, as demonstrated in surveys of the Amazon rainforest.
Category:Geodesy Category:Remote sensing Category:3D imaging