| Autonomous system | |
|---|---|
| Name | Autonomous system |
| Type | Concept |
| Field | Robotics; Computer science; Control theory; Artificial intelligence; Networking |
An autonomous system is an engineered system capable of goal-directed operation with reduced human intervention, integrating sensing, decision-making, and actuation. It appears across disciplines, from John McCarthy's early Artificial intelligence research to DARPA challenges and commercial deployments by Tesla, Inc., Boston Dynamics, DJI, and Waymo. Autonomous systems intersect developments in Control theory, Machine learning, Computer vision, Robotics, and Systems engineering, and are governed by standards and norms from bodies such as the IEEE and ISO.
An autonomous system is defined as a system that perceives its environment, reasons about objectives, plans actions, and executes behaviors with varying degrees of human oversight. Historical milestones include work at MIT on autonomous agents, the Stanford Cart, and the DARPA Grand Challenge; contemporary examples range from surgical robotics platforms at Intuitive Surgical to unmanned aerial vehicles by Lockheed Martin and Northrop Grumman. The scope spans physical robots, Autonomous vehicles, software agents in Distributed systems, and networked infrastructures managed by operators such as AT&T and Google. Standards-setting organizations like ISO/TC 299 and consortia such as SAE International influence the definitions and capability levels used in industry and regulation.
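The perceive-reason-plan-execute cycle described above can be sketched as a minimal control loop. This is an illustrative toy, not any platform's actual API: the sensor, planner, and actuator names are invented, and the "environment" is a single scalar distance.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    distance_to_goal: float  # metres, from a hypothetical range sensor

def perceive(true_distance: float) -> Observation:
    """Stand-in for the sensing subsystem (real systems fuse many sensors)."""
    return Observation(distance_to_goal=true_distance)

def plan(obs: Observation, max_step: float = 1.0) -> float:
    """Choose a forward velocity: move toward the goal, stop when close."""
    if obs.distance_to_goal <= 0.5:
        return 0.0
    return min(max_step, obs.distance_to_goal)

def act(distance: float, velocity: float, dt: float = 1.0) -> float:
    """Stand-in for actuation: advance the state by the commanded velocity."""
    return distance - velocity * dt

distance = 10.0
for _ in range(20):          # bounded loop in place of "run until operator stops"
    obs = perceive(distance)
    command = plan(obs)
    if command == 0.0:       # goal reached within tolerance
        break
    distance = act(distance, command)

print(round(distance, 2))    # → 0.0
```

Real architectures distribute these three stages across processes or machines and insert a human-oversight layer between `plan` and `act`, which is where the "varying degrees of human oversight" enter.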
Architectures typically combine perception, cognition, planning, and actuation subsystems tied together by middleware and communication stacks. Perception modules rely on sensors produced by firms like Sony, Bosch, and Velodyne, and on algorithms for Computer vision, Lidar processing, and sensor fusion from labs at Carnegie Mellon University and the University of California, Berkeley. Cognition uses models and frameworks from DeepMind, OpenAI, and academic groups employing Reinforcement learning, Bayesian networks, or symbolic planners derived from the work of Herbert A. Simon and Allen Newell. Planning and control include trajectory optimization influenced by Richard Bellman's dynamic programming and control algorithms from Kalman filter theory; actuation integrates hardware from vendors like Siemens and Honeywell. Middleware examples include ROS and proprietary frameworks used by Toyota Research Institute and NVIDIA. Networking and cybersecurity components borrow protocols and practices from the IETF and NIST.
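The Kalman filter theory cited above underpins much of the state estimation in these perception stacks. A minimal scalar sketch, assuming a nearly constant state observed through noisy measurements (real perception fuses multiple sensors and tracks multi-dimensional state):

```python
import random

def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Minimal scalar Kalman filter: estimate a (nearly) constant state
    from noisy measurements. Each step predicts (uncertainty grows),
    then corrects toward the new measurement by the Kalman gain."""
    x, p = measurements[0], 1.0   # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var          # predict: uncertainty grows over time
        k = p / (p + meas_var)    # Kalman gain: trust in the measurement
        x += k * (z - x)          # correct estimate toward measurement
        p *= (1.0 - k)            # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Simulated noisy range readings around a true value of 5.0 m
random.seed(0)
noisy = [5.0 + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(noisy)
print(round(est[-1], 2))
```

The same predict-correct structure generalizes to the matrix form used for vehicle pose tracking and multi-sensor fusion; dynamic-programming planners (in the Bellman tradition) then consume these filtered state estimates.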
Taxonomies classify systems by autonomy level, domain, and architecture. The SAE International J3016 levels inform the vehicle autonomy categories used by General Motors and Ford Motor Company. Domains include aerial systems (UAVs) used by Boeing and Airbus, maritime unmanned surface vehicles in programs by DARPA and the US Navy, industrial manipulators popularized by KUKA and ABB, and software agents deployed by Amazon and Microsoft Azure. Architectures vary: centralized command-and-control used in legacy NASA missions, decentralized multi-agent systems evidenced in swarm robotics research at Cornell University and EPFL, and hybrid human-in-the-loop designs adopted by NATO partners. Classification also covers autonomy modes such as adaptive, reactive, deliberative, and hybrid, each rooted in cognitive architectures from John Laird and Paul Rosenbloom.
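The SAE J3016 scale referenced above defines six driving-automation levels (0-5). A small sketch of how such a classification might be encoded (level names follow the J3016 taxonomy; the helper function is illustrative):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Driving-automation levels per SAE J3016."""
    NO_AUTOMATION = 0           # human driver performs all driving tasks
    DRIVER_ASSISTANCE = 1       # single assist feature (steering OR speed)
    PARTIAL_AUTOMATION = 2      # combined assist; driver monitors constantly
    CONDITIONAL_AUTOMATION = 3  # system drives; driver takes over on request
    HIGH_AUTOMATION = 4         # no takeover needed within its design domain
    FULL_AUTOMATION = 5         # system drives under all conditions

def driver_must_monitor(level: SAELevel) -> bool:
    """At levels 0-2 the human driver monitors the environment;
    from level 3 upward the system monitors while engaged."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # → True
print(driver_must_monitor(SAELevel.CONDITIONAL_AUTOMATION))  # → False
```

The key regulatory boundary sits between levels 2 and 3, where responsibility for monitoring the driving environment shifts from the human to the system.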
Applications span transportation, defense, healthcare, logistics, agriculture, and entertainment. In transportation, Waymo, Uber Technologies, Inc., and Daimler AG develop self-driving taxis and trucks; defense adopters include Lockheed Martin and Raytheon Technologies for ISR and force protection. Healthcare applications involve robotic surgery by Intuitive Surgical, assistive devices from Philips, and research at Johns Hopkins University. Logistics and warehousing see deployments by Amazon Robotics (formerly Kiva Systems); precision agriculture uses platforms from John Deere and startups incubated at Y Combinator. Entertainment and service robots include creations by Hanson Robotics and SoftBank Robotics. Smart infrastructure use cases integrate autonomous management in projects linked to Siemens and Huawei research initiatives.
Safety engineering and ethical frameworks are shaped by incidents and by policy responses from institutions such as the European Commission, the U.S. Department of Transportation, and the World Health Organization. Regulatory regimes reference UNECE regulations for vehicle safety, FAA rules for unmanned aircraft, and privacy directives influenced by the jurisprudence of the European Court of Human Rights. Ethical concerns engage scholars and organizations, including debates influenced by Peter Singer, the Asilomar AI Principles community, and advisory panels to White House administrations. Liability, transparency, explainability, and bias mitigation are addressed through standards from the IEEE Standards Association and guidance from NIST and national data protection authorities such as CNIL.
Core challenges include safe perception under adversarial conditions, studied by researchers at MIT and the University of Oxford; robust decision-making under uncertainty, advanced by teams at DeepMind and Stanford University; validation and verification methods, championed by NASA and Carnegie Mellon University; and secure communications, influenced by DARPA programs. Research directions emphasize explainable AI from groups at UC Berkeley and the University of Cambridge, formal methods integration pursued by Microsoft Research and ETH Zurich, and scalable multi-agent coordination developed in labs at Imperial College London and the Georgia Institute of Technology. Cross-disciplinary initiatives link law scholars at Harvard Law School with engineers at Caltech to address governance, while consortia like the Partnership on AI coordinate industry research and policy engagement.