| ab (ApacheBench) | |
|---|---|
| Name | ApacheBench |
| Developer | Apache Software Foundation |
| Released | 1996 |
| Latest release | Varies with Apache HTTP Server |
| Operating system | Cross-platform |
| License | Apache License 2.0 |
| Website | Apache HTTP Server Project |
ab (ApacheBench)
ab is a command-line benchmarking tool bundled with the Apache HTTP Server, used to measure HTTP server performance. It generates concurrent HTTP requests against a target URL and reports aggregated statistics such as requests per second and time per request. System administrators have long used ab alongside tools such as JMeter, wrk, and Siege to validate configurations of web servers including Apache HTTP Server, nginx, and Lighttpd.
ab was created by the Apache Software Foundation as a lightweight load and concurrency tester distributed and documented with the Apache HTTP Server. It operates from a single host, issuing a specified number of requests at a specified concurrency level against a target URL and providing an immediate snapshot of throughput and latency under synthetic load. Administrators and researchers commonly use ab when comparing kernel tuning, TCP stack changes, or OpenSSL parameterization, and ab invocations are easily driven by orchestration tools such as Ansible or Puppet in automated testing pipelines.
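The single-host load model described above can be sketched in Python. This is purely an illustration of the concept (a fixed request budget executed at a fixed concurrency level from one machine), not ab's actual implementation, which is written in C; the URL and parameter values are placeholders.

```python
# Illustrative single-host load generator: N requests at concurrency C,
# reporting throughput and mean latency, in the spirit of ab. Not ab itself.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def fetch(url):
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start


def bench(url, requests=100, concurrency=10):
    """Run `requests` GETs with `concurrency` workers; return aggregate stats."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(fetch, [url] * requests))
    elapsed = time.perf_counter() - start
    return {
        "requests_per_second": requests / elapsed,
        "mean_time_per_request_ms": 1000 * sum(latencies) / len(latencies),
    }
```

Like ab, this measures only what a single host can generate; it says nothing about behavior under geographically distributed load.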
ab supports options controlling request count, concurrency, HTTP method, custom headers, request body content, and SSL/TLS testing. The most common flags set the total number of requests (-n) and the number of concurrent clients (-c). Custom headers can carry authentication credentials such as OAuth bearer tokens, and HTTPS endpoints can be benchmarked when ab is built against the OpenSSL libraries shipped by distributions such as Debian and Red Hat Enterprise Linux. Output includes the aggregate statistics performance engineers use to tune web-facing infrastructure.
A basic invocation targets a single endpoint to measure throughput:

```sh
ab -n 1000 -c 50 http://example.com/
```

For HTTPS testing with HTTP keep-alive enabled (-k):

```sh
ab -n 500 -c 20 -k https://example.com/
```

To POST a JSON payload to an API endpoint:

```sh
ab -n 200 -c 10 -p payload.json -T application/json http://api.example.com/endpoint
```

In continuous integration pipelines run by systems such as GitLab CI or Jenkins, ab invocations are often wrapped by scripts that collect metrics for dashboards powered by Grafana and Prometheus.
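A CI wrapper of the kind mentioned above typically runs ab as a subprocess and scrapes the headline figures from its plain-text summary. The sketch below shows only the parsing step; it assumes ab's usual field labels ("Failed requests", "Requests per second", "Time per request") and would be fed the stdout of a call such as `subprocess.run(["ab", "-n", "100", "-c", "10", url], capture_output=True, text=True)`.

```python
# Sketch: extract headline metrics from ab's plain-text summary output,
# assuming ab's customary field labels.
import re


def parse_ab_summary(text):
    """Return a dict of headline metrics found in an ab summary."""
    patterns = {
        "failed_requests": r"Failed requests:\s+([\d.]+)",
        "requests_per_second": r"Requests per second:\s+([\d.]+)",
        # ab prints two "Time per request" lines; this matches the plain
        # "(mean)" one, not "(mean, across all concurrent requests)".
        "time_per_request_ms": r"Time per request:\s+([\d.]+)\s+\[ms\]\s+\(mean\)",
    }
    metrics = {}
    for key, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            metrics[key] = float(match.group(1))
    return metrics
```

The resulting dict can then be pushed to a metrics store or asserted against thresholds to fail the build on a performance regression.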
ab reports metrics such as Requests per second, Time per request (both the per-client mean and the mean across all concurrent requests), Transfer rate, and a distribution of connection times. Interpreting these requires some awareness of scheduling and networking behavior. High requests per second with low time per request indicates efficient throughput; high standard deviations in connection times often reflect queuing under load. Transfer rate should be correlated with network capacity and with disk I/O behavior when benchmarking content delivery systems.
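The relationship between these figures can be checked with simple arithmetic. The sketch below recomputes them from raw totals (request count, wall-clock duration, concurrency, bytes transferred); the numbers in the test are illustrative, not taken from a real run.

```python
# Sketch: recompute ab-style headline figures from raw benchmark totals.
def derived_metrics(total_requests, total_seconds, concurrency, total_bytes):
    """Return throughput and the two time-per-request figures ab reports."""
    rps = total_requests / total_seconds
    # "Time per request (mean)": latency as experienced by one simulated client.
    mean_ms = concurrency * 1000.0 / rps
    # "(mean, across all concurrent requests)": wall-clock time per completion.
    across_ms = 1000.0 / rps
    transfer_kb_per_s = total_bytes / 1024.0 / total_seconds
    return {
        "requests_per_second": rps,
        "time_per_request_ms_mean": mean_ms,
        "time_per_request_ms_across": across_ms,
        "transfer_kb_per_s": transfer_kb_per_s,
    }
```

For example, 1000 requests completed in 2 seconds at concurrency 50 yields 500 requests per second, a 100 ms mean time per request, and 2 ms per request across all concurrent clients; the factor between the two time figures is exactly the concurrency level.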
ab is single-process and single-host, which limits its ability to generate geographically distributed or very large-scale loads. It lacks the rich scripting and protocol-level manipulation found in Apache JMeter and Gatling, and it does not natively produce time-series telemetry in the formats used by Prometheus or OpenTelemetry. For TLS- and HTTP/2-specific tests, tools such as h2load (from the nghttp2 project), wrk, or k6 provide richer feature sets, and distributed testing platforms such as BlazeMeter or Flood IO address the limitations of single-host generators.
ab traces its lineage to early Apache HTTP Server releases in the 1990s, when performance-testing utilities were bundled with the server to validate changes. Maintenance has followed the release cadence of the Apache HTTP Server and the evolution of the HTTP protocol, intersecting with standards work by the Internet Engineering Task Force, including its TLS working groups. Community contributions and bug reports flow through the Apache Software Foundation's normal channels.
Running ab against production endpoints can resemble a denial-of-service attack and should be coordinated with network operations teams to avoid unintended outages. ab has no built-in rate-limiting policies or authentication workflows, so it can overload systems if misused. For TLS verification and cipher negotiation, administrators should keep the underlying OpenSSL or LibreSSL stack up to date per current security advisories. When integrating ab into test suites for critical infrastructure, ensure compliance with applicable organizational and regulatory policies.