| Intrusion detection systems | |
|---|---|
| Name | Intrusion detection systems |
| Invented | 1980s |
| Inventor | Clifford Stoll; Dorothy E. Denning |
| Type | Computer security |
Intrusion detection systems (IDS) provide monitoring and analysis tools designed to identify unauthorized access, misuse, or anomalous activity across networks ranging from the early ARPANET to the modern Internet, LANs, and WANs. Originating from academic work by Clifford Stoll and Dorothy E. Denning and commercialized alongside products from IBM, Cisco Systems, and McAfee, these systems integrate with UNIX and Microsoft Windows environments and inform incident response at agencies such as the National Security Agency and the European Union Agency for Cybersecurity. They operate in enterprise deployments at companies such as Bank of America, Walmart, and Amazon, as well as in critical infrastructure overseen by the Department of Homeland Security and the United Kingdom's National Cyber Security Centre.
IDS monitor network traffic, host activity, and application logs to detect suspicious behavior that may indicate threats from actors such as the hacker collective Anonymous, state-sponsored advanced persistent threat (APT) groups, or malware families such as Stuxnet, WannaCry, and Mirai. They complement defensive tools including firewalls, SIEM platforms from Splunk, and endpoint protection from Symantec and Kaspersky Lab. Early architectures trace to research at SRI International and to systems shaped by standards from the National Institute of Standards and Technology and protocols such as TCP/IP and SNMP.
IDS are commonly categorized by placement and scope: network-based devices deployed at gateway points, as used by Cisco Systems and Juniper Networks; host-based agents integrated with Microsoft Windows Server or Linux distributions maintained by organizations like Red Hat and Canonical; and distributed or hybrid systems used by Google and Facebook. Architectural models include signature-driven appliances from vendors like McAfee and Trend Micro, anomaly-focused research systems from MIT and Carnegie Mellon University, and protocol-aware sensors developed in collaboration with IETF working groups. Scalable deployments leverage orchestration tools such as Kubernetes and Docker and are designed to interoperate with identity providers like Okta and directory services built on LDAP.
Signature-based detection relies on pattern libraries and rule sets such as those from Snort and Emerging Threats, drawing on indicators of compromise cataloged in studies by the CERT Coordination Center at Carnegie Mellon University. Anomaly detection uses statistical models and machine learning techniques developed at institutions such as Stanford University and the Massachusetts Institute of Technology, including neural network methods associated with researchers such as Yann LeCun and Geoffrey Hinton, to profile baseline behavior. Stateful protocol analysis inspects sequences in HTTP and SMTP traffic against the formal grammars specified in IETF RFCs. Correlation engines integrate alerts using techniques pioneered at Lawrence Berkeley National Laboratory and Sandia National Laboratories, while sandboxing and dynamic analysis employ environments like Cuckoo Sandbox and virtualization from VMware and Xen.
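The two main detection approaches above can be sketched in a few lines. This is a minimal illustration, not a real IDS engine: the signature patterns, rule names, and traffic baseline below are invented for the example, and the anomaly model is a simple z-score test rather than the richer statistical and machine learning models used in production systems.

```python
import re
from statistics import mean, stdev

# Hypothetical rule set pairing rule names with payload patterns.
# These are illustrative regexes, not actual Snort or Emerging Threats rules.
SIGNATURES = {
    "sql-injection-probe": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def match_signatures(payload: str) -> list[str]:
    """Signature-based detection: return names of rules whose pattern
    appears in the payload."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

def is_anomalous(value: float, baseline: list[float], threshold: float = 3.0) -> bool:
    """Anomaly-based detection: flag a measurement that deviates from the
    baseline mean by more than `threshold` standard deviations (z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Example baseline: requests per minute observed during normal operation.
baseline_rpm = [95, 102, 98, 110, 105, 99, 101]
print(match_signatures("GET /search?q=1 UNION SELECT password FROM users"))
print(is_anomalous(450.0, baseline_rpm))  # a sudden spike is flagged
```

The signature matcher finds known-bad patterns regardless of traffic volume, while the z-score check catches volume anomalies with no matching signature; real deployments combine both, which is why correlation engines are needed to merge their alerts.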
Operational deployment involves placement at choke points in networks operated by carriers such as AT&T and Verizon Communications, with endpoint agents distributed via management systems such as Microsoft System Center and Puppet. Tuning requires signature updates distributed much like Microsoft Update, along with threat intelligence sharing coordinated through FIRST and Information Sharing and Analysis Center (ISAC) groups spanning sectors such as finance, led by the Financial Services ISAC, and healthcare, coordinated with HITRUST. Incident response workflows often follow playbooks influenced by frameworks from the MITRE Corporation (including MITRE ATT&CK), reporting standards like the Common Event Format and STIX, and escalation channels to law enforcement such as Federal Bureau of Investigation cyber units.
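To make the reporting-standards step concrete, the sketch below renders an IDS alert as a Common Event Format (CEF) line. The vendor, product, and field values are hypothetical, and the escaping is simplified relative to the full CEF specification (which also covers newlines and extension-value escaping).

```python
def to_cef(vendor: str, product: str, version: str,
           signature_id: str, name: str, severity: int,
           extensions: dict[str, str]) -> str:
    """Render an alert as a CEF line:
    CEF:0|Vendor|Product|Version|SignatureID|Name|Severity|key=value ...

    Simplified sketch: escapes backslash and pipe in header fields as
    CEF requires, but omits extension-value escaping for brevity."""
    def esc_header(v: str) -> str:
        return v.replace("\\", "\\\\").replace("|", "\\|")

    header = "|".join([
        "CEF:0", esc_header(vendor), esc_header(product),
        esc_header(version), esc_header(signature_id),
        esc_header(name), str(severity),
    ])
    ext = " ".join(f"{k}={v}" for k, v in extensions.items())
    return f"{header}|{ext}"

# Hypothetical sensor emitting a port-scan alert.
line = to_cef("ExampleIDS", "SensorNode", "1.0", "1001",
              "Port Scan Detected", 5,
              {"src": "203.0.113.7", "dst": "198.51.100.20", "dpt": "22"})
print(line)
# CEF:0|ExampleIDS|SensorNode|1.0|1001|Port Scan Detected|5|src=203.0.113.7 dst=198.51.100.20 dpt=22
```

A shared line format like this is what lets SIEM platforms and ISAC feeds ingest alerts from sensors made by different vendors.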
IDS performance is measured using detection rate, false positive rate, and computational overhead, observed in benchmarks from NIST and evaluations published at venues such as the USENIX Security Symposium, the IEEE Symposium on Security and Privacy, and the ACM Conference on Computer and Communications Security. Test datasets include corpora derived from DARPA-sponsored evaluation exercises and traffic traces analyzed in academic work at the University of California, Berkeley and the University of Cambridge. Metrics also consider mean time to detect (MTTD) and mean time to respond (MTTR), aligned with service level agreements used by Accenture and Deloitte in managed security services.
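The metrics above follow directly from a confusion matrix over labeled test traffic. A minimal sketch, with invented counts for illustration:

```python
def detection_metrics(true_positives: int, false_positives: int,
                      false_negatives: int, true_negatives: int) -> dict[str, float]:
    """Compute standard IDS evaluation metrics from a confusion matrix."""
    # Detection rate (recall): fraction of real attacks that were flagged.
    detection_rate = true_positives / (true_positives + false_negatives)
    # False positive rate: fraction of benign events wrongly flagged.
    false_positive_rate = false_positives / (false_positives + true_negatives)
    return {"detection_rate": detection_rate,
            "false_positive_rate": false_positive_rate}

def mean_time(delays_minutes: list[float]) -> float:
    """Mean per-incident delay: detection latencies give MTTD,
    response latencies give MTTR."""
    return sum(delays_minutes) / len(delays_minutes)

# Hypothetical evaluation: 100 attacks and 10,000 benign events.
m = detection_metrics(true_positives=90, false_positives=40,
                      false_negatives=10, true_negatives=9960)
print(m)                              # detection rate 0.9, FPR 0.004
print(mean_time([12.0, 45.0, 3.0]))   # MTTD of 20.0 minutes
```

Note why the false positive rate is reported against benign traffic volume: at 10,000 benign events, even a 0.4% FPR yields 40 spurious alerts against 90 true ones, which is the practical tuning burden analysts face.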
Deployment raises legal and privacy issues governed by statutes and directives such as the General Data Protection Regulation and the Computer Fraud and Abuse Act, with oversight from bodies like the European Commission and national data protection authorities, including the Information Commissioner's Office in the UK. Ethical questions involve lawful interception rules shaped by decisions of courts such as the United States Courts of Appeals and policy guidance from Council of Europe committees. Compliance and auditing rely on frameworks like ISO/IEC 27001 and reporting to regulators, including the Securities and Exchange Commission when incidents affect publicly traded firms such as Microsoft and Apple.
Category:Computer security