LLMpedia: The first transparent, open encyclopedia generated by LLMs

CARLA (simulator)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: DeepMind Lab (hop 5)
Expansion Funnel: Raw 100 → Dedup 0 → NER 0 → Enqueued 0
CARLA (simulator)
Name: CARLA
Title: CARLA (simulator)
Developer: Computer Vision Center (CVC), Intel Labs, Toyota Research Institute
Released: 2017
Latest release: 2021
Programming languages: C++, Python
Platforms: Linux, Windows
License: MIT License

CARLA is an open-source autonomous driving simulator originally released to accelerate research in urban driving, perception, planning, and control. It provides realistic rendering, sensor models, and scenario scripting to support evaluation of algorithms developed by researchers at institutions such as the Massachusetts Institute of Technology, Stanford University, the University of California, Berkeley, and Carnegie Mellon University. The project has seen contributions and usage across industry, including at Uber, Waymo, Ford Motor Company, and Tesla, Inc.

Overview

CARLA offers a virtual environment for testing autonomous systems using high-fidelity assets derived from collaborations with entities like Audi, Volkswagen, NVIDIA, and research groups at University College London. It targets tasks spanning perception, prediction, planning, and control with components comparable to those used in projects at Google DeepMind, OpenAI, MIT Media Lab, and ETH Zurich. CARLA supports integration with toolchains from ROS, PyTorch, TensorFlow, and Keras. Its scenes include urban layouts inspired by locations such as New York City, San Francisco, Berlin, and Barcelona, enabling experiments that relate to datasets like KITTI, Cityscapes, nuScenes, and Waymo Open Dataset.
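Relating simulator output to datasets such as Cityscapes or KITTI is largely a label-space problem: per-pixel semantic tags produced by the simulator must be remapped into the target dataset's class IDs before any cross-dataset evaluation. A minimal sketch of that remapping step is below; the tag IDs and class assignments are invented placeholders (CARLA's actual semantic tag set varies across versions), not the real mapping.

```python
# Hypothetical simulator tag -> dataset train ID mapping.
# These IDs are illustrative only; CARLA's real semantic tag
# set and Cityscapes' train IDs must be looked up per version.
SIM_TO_TRAIN_ID = {
    0: 255,  # unlabeled -> ignore
    1: 2,    # building
    4: 11,   # pedestrian
    7: 0,    # road
    10: 13,  # vehicle
}

def remap_semantic(tags, mapping, ignore_id=255):
    """Remap a 2-D grid of per-pixel semantic tags into another label space.

    Tags with no entry in the mapping fall back to the ignore ID, which
    evaluation scripts typically exclude from metrics such as mean IoU.
    """
    return [[mapping.get(t, ignore_id) for t in row] for row in tags]

grid = [[7, 7, 10],
        [1, 0, 4]]
print(remap_semantic(grid, SIM_TO_TRAIN_ID))  # [[0, 0, 13], [2, 255, 11]]
```

In practice this lookup is done with a vectorized table over image arrays, but the logic is the same: every simulator class either maps to a dataset class or is marked as ignored.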

Features and Architecture

The simulator's core is built on the Unreal Engine renderer and leverages middleware patterns similar to platforms developed by Autoware Foundation members and industry teams at Apple Inc. and NVIDIA Research. CARLA exposes sensor models for cameras, lidars, radars, GPS/IMU, and semantic segmentation akin to systems used in projects from the University of Oxford and Imperial College London. Its architecture includes a Python API inspired by interfaces from OpenAI Gym and synchronization mechanisms comparable to Apache Kafka streaming. Scenario definition uses behavior trees and finite-state machines analogous to approaches from Google Cartographer and DeepMind Lab.
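The finite-state-machine style of scenario definition mentioned above can be sketched in a few lines. The example below models a hypothetical cut-in scenario that waits for the ego vehicle to close within a trigger distance, then runs a timed lane change; the state names, trigger distance, and duration are invented for illustration and are not CARLA's actual scenario vocabulary.

```python
from dataclasses import dataclass

@dataclass
class ScenarioFSM:
    """Minimal FSM for a hypothetical cut-in scenario (illustrative only)."""
    trigger_distance: float = 20.0  # metres at which the adversary cuts in
    cut_in_duration: float = 3.0    # seconds the lane change takes
    state: str = "WAIT"
    elapsed: float = 0.0

    def step(self, gap_to_ego: float, dt: float) -> str:
        """Advance the FSM one simulation tick and return the current state."""
        if self.state == "WAIT" and gap_to_ego <= self.trigger_distance:
            self.state = "CUT_IN"          # trigger condition met
        elif self.state == "CUT_IN":
            self.elapsed += dt             # time the manoeuvre
            if self.elapsed >= self.cut_in_duration:
                self.state = "DONE"
        return self.state

fsm = ScenarioFSM()
print(fsm.step(30.0, 0.05))  # ego still far away -> WAIT
print(fsm.step(18.0, 0.05))  # trigger distance crossed -> CUT_IN
```

A behavior tree expresses the same logic as a hierarchy of condition and action nodes; the FSM form is simply flatter and easier to inspect tick by tick.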

Development and Releases

Initial development began as a joint effort between researchers affiliated with Intel Labs, Toyota Research Institute, and the Computer Vision Center at the Universitat Autònoma de Barcelona, with subsequent contributions modeled after practices at Microsoft Research and Facebook AI Research. Major releases introduced map editing tools influenced by workflows at Esri and asset pipelines similar to those at Epic Games. Versioning and continuous integration follow patterns from GitHub projects curated by communities like the Linux Foundation and the Apache Software Foundation. Academic papers describing the platform were presented at venues such as CVPR, ICRA, ICCV, and NeurIPS.

Use Cases and Applications

CARLA is used for development and validation in scenarios ranging from urban intersection negotiation to lane-change maneuvers, drawing parallels to experiments at Toyota Research Institute-Advanced Development and BMW Group Research. Researchers use it to benchmark perception stacks on Intel and AMD hardware, and for reinforcement learning studies in the style of DeepMind and OpenAI Five. Autonomous vehicle startups like Cruise LLC, Aurora Innovation, and Zoox have employed virtual testing regimes inspired by regulatory approaches at the National Highway Traffic Safety Administration and European Commission policy groups. It supports curriculum learning protocols related to trials at Carnegie Mellon University and sensor-failure studies analogous to tests by NASA research teams.
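The curriculum learning protocols mentioned above typically ramp scenario difficulty with training progress, for example by increasing traffic density and worsening weather as the agent improves. A minimal sketch follows; the stage boundaries, actor counts, and weather labels are invented for illustration and would be tuned per task in any real setup.

```python
def curriculum_stage(episode: int) -> dict:
    """Map a training episode index to scenario difficulty parameters.

    Stages are ordered by episode threshold; the highest threshold
    at or below the current episode wins. All numbers are illustrative.
    """
    stages = [
        (0,    {"vehicles": 0,  "pedestrians": 0,  "weather": "clear"}),
        (500,  {"vehicles": 20, "pedestrians": 10, "weather": "clear"}),
        (2000, {"vehicles": 50, "pedestrians": 30, "weather": "rain"}),
        (5000, {"vehicles": 80, "pedestrians": 50, "weather": "night_rain"}),
    ]
    chosen = stages[0][1]
    for threshold, params in stages:
        if episode >= threshold:
            chosen = params  # keep upgrading while thresholds are passed
    return chosen

print(curriculum_stage(100))   # empty town, clear weather
print(curriculum_stage(2500))  # dense traffic in rain
```

Sensor-failure studies follow a similar pattern: a schedule or random process decides, per episode, which sensors to blank or corrupt before feeding observations to the agent.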

Evaluation and Benchmarks

CARLA enables reproducible benchmarks for tasks such as semantic segmentation, object detection, and end-to-end control, comparable to metrics defined in KITTI, COCO, ImageNet, and nuScenes. Leaderboards maintained by academic consortia echo evaluation strategies used at NeurIPS Competition Track and ICRA Challenges. Performance profiling aligns with tools from NVIDIA Nsight, Intel VTune, and Google Cloud Platform benchmarking suites. Safety-critical assessments draw on standards deliberated at SAE International and test methodologies referenced in publications from IEEE and ACM.
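Leaderboard-style driving evaluations of the kind described above commonly combine route completion with multiplicative infraction penalties: each infraction scales the score down by a fixed coefficient. The sketch below shows that pattern; the coefficient values are illustrative and should not be read as the CARLA Leaderboard's exact, version-specific numbers.

```python
from math import prod

# Illustrative per-infraction penalty coefficients. Real leaderboards
# publish their own exact values, which have changed across versions.
PENALTIES = {
    "collision_pedestrian": 0.50,
    "collision_vehicle": 0.60,
    "collision_static": 0.65,
    "red_light": 0.70,
    "stop_sign": 0.80,
}

def driving_score(route_completion: float, infractions: list) -> float:
    """Score = fraction of route completed, scaled by each infraction's penalty."""
    penalty = prod(PENALTIES[i] for i in infractions)
    return route_completion * penalty

# Completing 90% of the route with one red-light violation:
print(round(driving_score(0.9, ["red_light"]), 4))  # 0.63
```

Because the penalties multiply, repeated infractions compound quickly, which is the usual rationale for this scheme: a run with many small violations should not outscore a clean but shorter one.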

Community and Ecosystem

An active community of contributors includes researchers from the University of Oxford, ETH Zurich (including its Autonomous Systems Lab), Technische Universität München, Tsinghua University, Peking University, and industry practitioners from Daimler AG and General Motors. The ecosystem integrates with projects such as Autoware, Apollo, and the LG SVL Simulator, and with toolchains like Gazebo, fostering interoperability with datasets like the Oxford RobotCar Dataset and frameworks from ROCm and CUDA. Workshops and tutorials at conferences like IROS, ECCV, and RSS have expanded adoption, while educational courses at MIT, Stanford University, and ETH Zurich incorporate CARLA into curricula.

Category:Simulation software