| CARLA | |
|---|---|
| Name | CARLA |
| Developer | Intel Corporation; Toyota Research Institute; Computer Vision Center (Universitat Autònoma de Barcelona) |
| Released | 2017 |
| Programming language | C++; Python |
| Operating system | Linux; Windows |
| License | MIT License |
CARLA
CARLA is an open-source simulation platform for research in autonomous driving and robotics that provides high-fidelity urban environments, sensor models, and control interfaces. It was created to support reproducible experiments across laboratories and companies by offering a configurable virtual world with rich interaction among agents, vehicles, and pedestrians. The platform facilitates integration with frameworks and tools used in research and industry, enabling experiments spanning perception, planning, and control.
CARLA originated as a collaboration between Intel Labs, the Toyota Research Institute, and the Computer Vision Center in Barcelona to address the need for realistic, flexible simulation in autonomous vehicle research; it was first presented at the Conference on Robot Learning (CoRL) in 2017. A central goal is reproducible evaluation across academic and industrial laboratories, so the platform emphasizes modularity and interoperates with external toolchains such as the Robot Operating System (ROS) and with reinforcement learning frameworks widely used in the research community, including work on advanced driver-assistance systems in industry.
CARLA’s architecture separates world rendering, physics, and sensor emulation into distinct modules. The simulation core is built on Unreal Engine, which provides photorealistic rendering and, through its integrated NVIDIA PhysX engine, vehicle and collision dynamics. The control and API layer exposes C++ and Python bindings compatible with ROS and with reinforcement learning stacks such as TensorFlow, PyTorch, and RLlib. Road networks are described in the OpenDRIVE standard, scene assets are authored with Unreal Engine tooling, and maps can draw on geographic data sources such as OpenStreetMap. Development follows standard open-source practice, with contributions coordinated through repositories on GitHub and automated continuous-integration pipelines.
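The client–server control pattern exposed by the API layer can be sketched in pure Python. The `SimWorld` class below is a hypothetical stand-in for illustration only, not the actual `carla.World`; in the real client one enables synchronous mode in the world settings and advances the server one frame per `tick()` call:

```python
# Sketch of CARLA-style synchronous stepping, using a hypothetical
# SimWorld mock in place of the real carla.World (assumption: the
# real API requires a running CARLA server, so a stand-in is used).

class SimWorld:
    """Stand-in for a simulator world advancing by a fixed timestep."""
    def __init__(self, fixed_delta_seconds=0.05):
        self.dt = fixed_delta_seconds
        self.frame = 0
        self.elapsed = 0.0

    def tick(self):
        """Advance the simulation exactly one frame."""
        self.frame += 1
        self.elapsed += self.dt
        return self.frame

def run_episode(world, n_frames, controller):
    """Lock-step loop: compute a control decision, then tick the server."""
    for _ in range(n_frames):
        controller(world.frame)   # e.g. a throttle/steer decision
        world.tick()              # server advances one deterministic frame
    return world.elapsed

world = SimWorld(fixed_delta_seconds=0.05)
elapsed = run_episode(world, n_frames=20, controller=lambda f: {"throttle": 0.5})
print(world.frame, round(elapsed, 2))   # 20 frames of 0.05 s -> 1.0 simulated second
```

Lock-step ticking is what makes experiments reproducible: the server never advances until the client has finished computing its control for the current frame.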
CARLA provides configurable sensor suites, including camera models (RGB, depth, and semantic segmentation), lidar, radar, GNSS, and IMU simulations whose parameters (resolution, field of view, range, rotation frequency, noise) can be tuned to approximate commercial sensors such as those from Velodyne, Ouster, Bosch, and Continental AG. The platform supports configurable weather and lighting conditions rendered through Unreal Engine's pipeline, and its traffic manager controls vehicle and pedestrian behavior in the simulated world. The API supports both synchronous and asynchronous control loops, and telemetry and logging can be exported in formats compatible with datasets such as KITTI, nuScenes, the Waymo Open Dataset, and Cityscapes.
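Simulated cameras of this kind are typically specified by image width, height, and horizontal field of view, from which the standard pinhole intrinsic matrix follows. A minimal sketch (the helper names `pinhole_intrinsics` and `project` are illustrative, not part of any CARLA API):

```python
import math

def pinhole_intrinsics(width, height, fov_deg):
    """Build a 3x3 pinhole intrinsic matrix K from image size and
    horizontal field of view: focal length in pixels is
    f = width / (2 * tan(fov / 2)), principal point at the centre."""
    f = width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    cx, cy = width / 2.0, height / 2.0
    return [[f,   0.0, cx],
            [0.0, f,   cy],
            [0.0, 0.0, 1.0]]

def project(K, point_cam):
    """Project a 3D point (x right, y down, z forward, metres) in the
    camera frame to pixel coordinates with the pinhole model."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return u, v

K = pinhole_intrinsics(800, 600, 90.0)   # 90-degree FOV -> f = 400 px
print(project(K, (0.0, 0.0, 10.0)))      # point on the optical axis -> image centre
```

This mapping is what lets labels rendered in the simulator (bounding boxes, depth) be expressed in the same pixel coordinates as datasets like KITTI.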
Researchers deploy CARLA for tasks in perception, sensor fusion, mapping, and planning, in academic groups and at automotive companies alike. It supports the development of both end-to-end learning systems and modular driving pipelines. CARLA is used in university courses, in community challenges such as the CARLA Autonomous Driving Leaderboard, and in safety-validation studies informed by standards such as ISO 26262 and by regulatory research at the National Highway Traffic Safety Administration. Startups in robotics and mapping also use its simulated scenarios for testing.
Benchmarking in CARLA involves repeatable scenario suites and metrics for collision rates, route completion, and ride comfort. The CARLA Autonomous Driving Leaderboard ranks agents by a driving score that combines route completion with penalties for infractions such as collisions and traffic-light violations, a format echoing earlier community benchmarks like the DARPA Grand Challenge and the KITTI Vision Benchmark Suite. Comparative studies often evaluate models trained in CARLA against real-world datasets, and results appear in venues such as CVPR, ICRA, IROS, and the IEEE Intelligent Vehicles Symposium.
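The general shape of such a score can be sketched as route completion multiplied by a product of per-infraction penalty factors. The penalty coefficients below are illustrative assumptions, not the official Leaderboard values:

```python
def route_completion(distance_driven_m, route_length_m):
    """Fraction of the planned route completed, clipped to [0, 1]."""
    if route_length_m <= 0:
        raise ValueError("route length must be positive")
    return max(0.0, min(1.0, distance_driven_m / route_length_m))

def driving_score(completion, infractions, penalties):
    """Route completion multiplied by a per-infraction penalty factor
    in (0, 1], applied once per recorded infraction."""
    score = completion
    for kind, count in infractions.items():
        score *= penalties.get(kind, 1.0) ** count
    return score

# Illustrative penalty coefficients (assumed values for the sketch).
PENALTIES = {"collision_vehicle": 0.60,
             "collision_pedestrian": 0.50,
             "red_light": 0.70}

rc = route_completion(1800.0, 2000.0)                   # 90% of the route driven
score = driving_score(rc, {"red_light": 1}, PENALTIES)  # 0.9 * 0.7
print(round(rc, 2), round(score, 2))
```

Multiplicative penalties keep the score bounded and make repeated infractions compound, which is why such formats favour cautious but complete driving over fast, infraction-heavy runs.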
An active ecosystem surrounds CARLA, with contributors from universities, research labs, and industry partners such as the Toyota Research Institute and Intel, alongside independent developers collaborating through repositories hosted on GitHub. The community organizes workshops and challenges at conferences such as NeurIPS, ICRA, and CVPR and at meetings of professional societies including the IEEE Robotics and Automation Society. Third-party tooling includes integrations with cloud services such as Amazon Web Services and Microsoft Azure for scalable, parallel simulation, and universities incorporate CARLA into curricula and capstone projects.
Category:Simulation software Category:Autonomous driving