| Software Agents | |
|---|---|
| Name | Software agents |
| Domain | Computer science, Artificial intelligence, Multi-agent systems |

# Software Agents
Software agents are computer programs that perform tasks autonomously on behalf of users or other programs. They interact with software, networks, and users to perceive environments, make decisions, and act to achieve goals; the field draws on research by Turing, Minsky, McCarthy, Wiener and Russell, and on work at institutions such as Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University and University of California, Berkeley.
Software agents encompass autonomous programs developed in the tradition of Artificial intelligence, Distributed computing, Human–computer interaction and Robotics research. Early work at MIT AI Lab and SRI International produced foundational prototypes alongside projects at DARPA and European Commission initiatives. Commercialization across firms such as IBM, Microsoft, Google, Amazon and Apple popularized agent concepts in products like virtual assistants and recommender systems. Major conferences include IJCAI, AAAI, AAMAS and NeurIPS.
Agents are classified along axes of autonomy, mobility, learning, reactivity and social ability, as defined by Wooldridge and Jennings. Types include simple reflex agents, model-based agents, goal-based agents and utility-based agents, a taxonomy popularized in the textbook by Russell and Norvig. Other classes include mobile agents (deployed in Distributed systems and networked environments), intelligent agents used in Data mining, and web agents used for crawling by organizations like Alexa Internet and search systems from Google. Multi-agent systems (MAS) involve agent societies studied at venues such as EUMAS and applied in domains such as electronic trading on platforms like NASDAQ.
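The simplest class above, the simple reflex agent, maps each percept directly to an action through condition-action rules, with no internal state. A minimal sketch, using the two-square "vacuum world" often used to illustrate this taxonomy (the environment and rule set here are illustrative, not from the text):

```python
# Simple reflex agent for a hypothetical two-square vacuum world.
# The agent has no model of the environment: each percept is mapped
# straight to an action by condition-action rules.

def simple_reflex_agent(percept):
    """percept is a (location, status) pair; returns an action string."""
    location, status = percept
    if status == "Dirty":
        return "Suck"          # rule: dirty square -> clean it
    if location == "A":
        return "Right"         # rule: square A is clean -> move right
    return "Left"              # rule: square B is clean -> move left

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # Right
print(simple_reflex_agent(("B", "Clean")))  # Left
```

A model-based agent would extend this by tracking state across percepts (e.g., which squares it has already cleaned), while goal- and utility-based agents select actions by searching over outcomes rather than matching fixed rules.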
Architectural paradigms include layered, event-driven, blackboard, reactive subsumption and deliberative architectures influenced by work at SRI International, MIT Media Lab and Xerox PARC. Middleware standards such as those from FIPA and platforms like JADE and Microsoft Bot Framework support interoperability. Agent communication languages such as KQML and ACL were developed in projects led by researchers at DARPA and implemented in systems by Sun Microsystems. Design patterns borrow from software engineering traditions at Bell Labs and AT&T while incorporating protocols used in IETF work on networking.
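Agent communication languages such as KQML express messages as a performative (e.g., `ask-one`, `tell`) followed by keyword parameters. A minimal sketch of building such a message; the field names follow common KQML parameter conventions, but the serializer and agent names are illustrative, not a conformant implementation:

```python
# Sketch of a KQML-style performative message. Parameter keywords
# (:sender, :receiver, :language, :ontology, :content) follow common
# KQML conventions; values here are made up for demonstration.

def kqml_message(performative, sender, receiver, content,
                 language="Prolog", ontology="example"):
    fields = {
        ":sender": sender,
        ":receiver": receiver,
        ":language": language,
        ":ontology": ontology,
        ":content": content,
    }
    body = " ".join(f"{k} {v}" for k, v in fields.items())
    return f"({performative} {body})"

msg = kqml_message("ask-one", "agent-a", "agent-b", '"price(widget, P)"')
print(msg)
# (ask-one :sender agent-a :receiver agent-b :language Prolog
#  :ontology example :content "price(widget, P)")
```

FIPA ACL uses the same performative-plus-parameters shape with a standardized set of communicative acts, which is what platforms like JADE implement for interoperability.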
Machine learning methods—reinforcement learning, supervised learning, unsupervised learning and deep learning—are integrated into agent control loops, with advances from groups at DeepMind, OpenAI and universities including the University of Toronto. Reinforcement learning algorithms like Q-learning and policy gradients, developed in work by Watkins and Sutton, enable agents to optimize cumulative rewards in environments modeled after benchmarks from OpenAI Gym and Atari 2600 evaluations. Autonomy research intersects with cognitive science labs at Yale and Oxford exploring decision-making, situated cognition and human–agent teaming for missions funded by NASA and the European Space Agency.
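Tabular Q-learning, the Watkins algorithm mentioned above, can be sketched in a few lines. The toy chain environment and hyperparameters below are made up for demonstration; the update rule itself is the standard one, moving Q(s, a) toward r + γ·max_a′ Q(s′, a′):

```python
import random

# Tabular Q-learning on a tiny hypothetical chain environment:
# states 0..3, action 0 moves left, action 1 moves right,
# reward 1 for reaching the terminal state 3.

N_STATES, ACTIONS = 4, [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # illustrative hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Watkins' update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy should move right from each non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The cumulative-reward objective here is exactly what the benchmark suites such as OpenAI Gym formalize: an environment exposes `step`-like transitions, and the agent maximizes discounted return.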
Software agents are deployed across sectors: virtual assistants in products by Apple and Amazon; recommendation engines at Netflix and Spotify; automated trading agents on exchanges such as NYSE and NASDAQ; autonomous systems in projects by Boeing and Lockheed Martin; and intelligent tutoring systems developed at institutions like Carnegie Mellon University and University of Illinois Urbana-Champaign. Other applications include smart grids managed by utilities with participation from Siemens and General Electric, supply-chain optimization for firms such as Walmart and Maersk, and cybersecurity agents used by Cisco Systems and Palo Alto Networks.
Ethical debates involve panels and reports from the European Commission, the United Nations, the IEEE, NIST and oversight bodies in jurisdictions including the United States, the European Union and the United Kingdom. Legal issues intersect with legislation debated in the United States Congress and regulatory action over privacy and accountability by agencies like the FTC. Security concerns range from adversarial attacks researched at UC Berkeley and Carnegie Mellon University to supply-chain threats investigated by DHS. Ethical frameworks proposed by committees at Oxford, Harvard and Stanford address bias, transparency and responsibility in deployments for healthcare overseen by institutions like the World Health Organization and in autonomous vehicles regulated by agencies such as NHTSA.
Evaluation uses benchmarks and metrics developed by communities at NeurIPS, ICLR and AAAI, and datasets such as ImageNet and the UCI Machine Learning Repository. Performance measures include task success rates, cumulative reward, latency, throughput and scalability, assessed on infrastructure from AWS, GCP and Microsoft Azure. Multi-agent evaluation draws on game-theoretic metrics from the work of Nash and on empirical tournaments run by labs such as OpenAI and by universities collaborating with industry partners such as DeepMind.
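Two of the measures above, task success rate and cumulative (discounted) reward, are straightforward to compute from episode logs. A minimal sketch; the episode data below is invented for demonstration:

```python
# Illustrative computation of two agent-evaluation metrics:
# task success rate across episodes, and discounted cumulative reward
# (the return) for a single episode's reward sequence.

def success_rate(episodes):
    """Fraction of episodes flagged as successful."""
    return sum(1 for ep in episodes if ep["success"]) / len(episodes)

def discounted_return(rewards, gamma=0.99):
    """Sum of rewards discounted by gamma per time step."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

episodes = [
    {"success": True,  "rewards": [0, 0, 1]},
    {"success": False, "rewards": [0, 0, 0]},
    {"success": True,  "rewards": [0, 1]},
    {"success": True,  "rewards": [1]},
]

print(success_rate(episodes))                              # 0.75
print(round(discounted_return([0, 0, 1], gamma=0.9), 3))   # 0.81
```

Latency, throughput and scalability are infrastructure-level measurements and are typically collected from the serving platform rather than from episode logs.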