LLMpedia
The first transparent, open encyclopedia generated by LLMs

wrk (software)

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Parent: Nginx (hop 3)
Expansion funnel: raw 78 → dedup 18 → NER 8 → enqueued 8
1. Extracted: 78
2. After dedup: 18
3. After NER: 8
Rejected: 10 (not a named entity: 10)
4. Enqueued: 8
wrk (software)
Name: wrk
Title: wrk
Developer: Will Glozer
Released: 2011
Latest release: var.
Programming language: C, Lua
Operating system: Linux, macOS, FreeBSD
License: MIT
Genre: HTTP benchmarking

wrk is a modern HTTP benchmarking tool designed to generate significant load for testing web server performance and network throughput. It combines a multithreaded design with a scalable event notification model to exercise HTTP/1.1 stacks, evaluate reverse proxy configurations, and probe application server behavior under stress. Widely cited in operational and academic benchmarking, it is used alongside other tools in performance evaluations of Nginx, Apache HTTP Server, HAProxy, and cloud services.

Overview

wrk originated to address limitations in older load generators such as ab (ApacheBench), Siege, and httperf. It leverages low-level system interfaces such as epoll on Linux, kqueue on FreeBSD and macOS, and poll on other POSIX systems to multiplex connections efficiently. The project is associated with performance-oriented discussions in communities around Redis, PostgreSQL, MySQL, and MongoDB deployments where HTTP front ends and API gateways are benchmarked. Contributors and users often compare its output with results from JMeter, Locust, Gatling, and wrk2 in publications and blog posts.

Features

wrk provides features catering to both simple and advanced scenarios: high concurrency, request pipelining, and customizable request generation. It includes scripting support via Lua for per-request customization, enabling dynamic headers, payloads, and response handling. Built-in latency sampling and statistics complement external observability tools such as Prometheus, Grafana, and InfluxDB. Integration patterns often pair wrk with SystemTap, perf (Linux tool), and eBPF-based tracers for deep profiling. Users report deployments in benchmarking stacks with Docker, Kubernetes, OpenStack, and Amazon Web Services environments.
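As a sketch of the Lua scripting interface described above, the following wrk script attaches a static header and reports error counts using wrk's documented `wrk.headers` table and `done` hook; the token value is a placeholder, and the script only runs when loaded by wrk itself:

```lua
-- auth.lua: attach a bearer token to every request and report HTTP errors.
-- The token value below is a placeholder, not a real credential.
wrk.headers["Authorization"] = "Bearer example-token"

-- Called once when the run ends; summary is populated by wrk and includes
-- an errors table with connect/read/write/status/timeout counts.
function done(summary, latency, requests)
  io.write("HTTP status errors: ", summary.errors.status, "\n")
end
```

It would be loaded with something like `wrk -t4 -c64 -d30s -s auth.lua http://127.0.0.1:8080/` (the target host and port here are illustrative).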

Architecture and Implementation

The core is implemented in C and designed around evented I/O: worker threads create a pool of nonblocking sockets, use epoll or kqueue to monitor readiness, and issue asynchronous writes and reads to sustain load. Lua scripting hooks are embedded via the Lua C API, enabling per-thread state and request templating; this design is often compared with wrk2, which adds constant-throughput load generation and corrects latency measurement for coordinated omission. Memory management practices mirror patterns from projects like Nginx and HAProxy, and TLS support is typically provided by builds linked with OpenSSL or layered through external wrappers such as stunnel. Cross-platform portability considerations echo those in Git's portability layers and CMake-based projects.
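The per-thread state and request templating mentioned above can be sketched with wrk's documented `request` hook and `wrk.format` helper; the path scheme is illustrative, and the script is a fragment that only executes inside wrk:

```lua
-- ids.lua: each worker thread runs its own Lua state, so this counter is
-- thread-private and distinct request paths are templated without locking.
local counter = 0

function request()
  counter = counter + 1
  -- wrk.format builds a raw HTTP request string from method, path,
  -- and (optionally) headers and body.
  return wrk.format("GET", "/items/" .. counter)
end
```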

Usage and Examples

Typical command-line invocation targets an endpoint and specifies thread and connection counts: users run experiments against endpoints such as REST API proxies, static hosts behind Content Delivery Network endpoints, or GraphQL servers. Lua scripts manipulate headers for authentication schemes such as OAuth 2.0 or JWT, and can emulate client behaviors observed in Nginx access logs or HAProxy traces. Example workflows integrate wrk runs with continuous integration systems such as Jenkins, Travis CI, GitHub Actions, or GitLab CI to gate performance regressions. Results are often correlated with system metrics collected by collectd, Telegraf, or Datadog.
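A minimal script along the lines described, assuming a local test endpoint (the payload and path are illustrative), sets the method, headers, and body once so every connection reuses them:

```lua
-- post.lua: issue identical JSON POST requests on every connection.
-- The payload below is an example, not a real API contract.
wrk.method = "POST"
wrk.headers["Content-Type"] = "application/json"
wrk.body = '{"name": "load-test", "value": 1}'
```

It would be invoked with something like `wrk -t4 -c100 -d30s -s post.lua http://127.0.0.1:8080/api/items`, where `-t` sets worker threads, `-c` total concurrent connections, and `-d` the run duration.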

Performance and Benchmarks

Benchmarks using wrk appear in evaluations comparing Nginx with Apache HTTP Server, and in research on event-driven servers versus thread-per-connection models, as seen in studies involving Node.js, Go, and Rust servers. Its ability to saturate a single node's NICs is used in studies of TCP stack tuning, SO_REUSEPORT strategies, and kernel-bypass techniques such as DPDK. Critics note that single-client generators can be bounded by CPU, network stack, or kernel limits, prompting multi-host orchestrations using Ansible or SaltStack to coordinate distributed tests. Comparative evaluations factor in latency percentiles (p50, p95, p99) and throughput, often plotted with Matplotlib or displayed in Grafana dashboards.
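The percentile reporting mentioned above can also be scripted directly; a sketch using wrk's documented `done` hook, which receives a latency object whose values are in microseconds (the choice of percentiles here is illustrative):

```lua
-- percentiles.lua: print selected latency percentiles at the end of a run.
-- latency:percentile(p) returns the p-th percentile in microseconds.
function done(summary, latency, requests)
  for _, p in ipairs({50, 95, 99}) do
    io.write(string.format("p%d latency: %.2f ms\n", p, latency:percentile(p) / 1000))
  end
end
```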

Development and Extensibility

Development typically occurs in public repositories and forks that extend functionality for TLS, HTTP/2, or gRPC traffic. Community patches introduce features inspired by wrk2 and by integrations with OpenTelemetry for tracing. Build systems and contributions follow Makefile conventions and continuous integration checks similar to those used by Linux kernel subsystem projects. Extensibility paths include adding protocol modules, extending the embedded LuaJIT scripting layer, or integrating with language-specific test harnesses such as pytest or Go's testing package.

Licensing and Reception

Distributed under the MIT License, the software is permissively licensed and has been adopted by enterprises, academic groups, and open-source practitioners. It is frequently cited in blog posts from companies such as Google, Facebook, Netflix, and LinkedIn when discussing performance engineering. Reviews emphasize its simplicity, low overhead, and scriptability compared with heavier frameworks such as Apache JMeter and Gatling, while noting limitations for multi-protocol benchmarks that tools such as Tsung or Siege address.

Category:Benchmarking software Category:Free benchmarking tools Category:Open-source software