| SPECweb99 | |
|---|---|
| Name | SPECweb99 |
| Developer | Standard Performance Evaluation Corporation |
| Release | 1999 |
| Genre | Web server benchmark |
| Platform | Unix, Windows |
SPECweb99
SPECweb99 is a standardized web server benchmark developed by the Standard Performance Evaluation Corporation (SPEC) to measure HTTP server performance under a realistic workload. It gave vendors, researchers, and procurement staff a repeatable basis for comparing hardware from companies such as Sun Microsystems, IBM, and Hewlett-Packard, and software stacks such as Apache and Microsoft IIS. Released in 1999 as the successor to SPECweb96, it became a widely cited reference point in both industry marketing and academic systems research.
SPECweb99 evaluates HTTP/1.0 and HTTP/1.1 request handling, including persistent connections, using synthetic clients and server-side components that simulate a mix of static and dynamic content delivery. Roughly 70% of requests target static files from a scaled file set, while the remainder exercise dynamic GET and POST operations modeled on cookie-based advertisement rotation. Published results covered web-serving systems from vendors such as Sun Microsystems, IBM, Hewlett-Packard, Dell, Compaq, Fujitsu, and NEC Corporation, running server software from suppliers including Microsoft and the Apache Software Foundation.
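The client-emulation idea can be illustrated with a short sketch. Everything below, including the server URL, the file paths, and the 70/30 static/dynamic split, is an illustrative assumption for exposition, not SPEC's actual harness code or workload parameters.

```python
# Minimal sketch of a synthetic HTTP load client in the spirit of
# SPECweb99-style client emulation. Paths, server address, and the
# 70/30 mix are hypothetical stand-ins, not SPEC's real workload.
import random
import time
import urllib.request

SERVER = "http://localhost:8080"   # hypothetical system under test
STATIC_PATHS = [f"/file_set/dir0/class0_{i}" for i in range(8)]
DYNAMIC_PATH = "/cgi-bin/dynamic"  # stands in for CGI-like content

def one_request():
    """Issue one request, weighted toward static content."""
    if random.random() < 0.7:                      # assumed static share
        url = SERVER + random.choice(STATIC_PATHS)
    else:
        url = SERVER + DYNAMic_path_query()
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    return time.monotonic() - start, len(body)

def DYNAMic_path_query():
    # emulate ad-rotation style dynamic GETs with a varying parameter
    return SERVER + DYNAMIC_PATH + "?ad=" + str(random.randrange(360))

if __name__ == "__main__":
    latencies = []
    for _ in range(100):
        elapsed, nbytes = one_request()
        latencies.append(elapsed)
    latencies.sort()
    print("median response time: %.3f s" % latencies[len(latencies) // 2])
```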
Development began in the late 1990s amid rapid growth in commercial web deployments. SPEC's web subcommittee drew on engineers from member companies including Intel Corporation, IBM, Sun Microsystems, and Hewlett-Packard, along with academic and industrial researchers. The resulting specification was designed to improve on its predecessor, SPECweb96, whose purely static workload no longer reflected production traffic.
SPECweb99 defines the client emulation, server workload, and measurement procedures, together with calibration and run-compliance rules intended to make published results reproducible. The test harness uses one or more driver machines to emulate HTTP clients, coordinated by a prime client that starts runs, gathers measurements, and validates conformance. The file-set sizes and request mix were derived from access patterns observed at production web sites of the period.
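A rough feel for the prime-client/driver split is given by the sketch below: a coordinator spawns worker processes that each run an emulated-client loop and report results back. The structure and all names are assumptions made for exposition; SPEC's harness is a separate and far more elaborate implementation.

```python
# Illustrative coordinator/driver sketch, not SPEC's harness.
import multiprocessing as mp
import random
import time

def driver(worker_id: int, duration_s: float, results: mp.Queue) -> None:
    """Emulate one driver machine: loop for the run duration, count ops."""
    deadline = time.monotonic() + duration_s
    completed = 0
    while time.monotonic() < deadline:
        time.sleep(random.uniform(0.001, 0.005))  # stand-in for one HTTP request
        completed += 1
    results.put((worker_id, completed))

if __name__ == "__main__":
    results: mp.Queue = mp.Queue()
    workers = [mp.Process(target=driver, args=(i, 2.0, results))
               for i in range(4)]
    for w in workers:          # the "prime client" starts all drivers together
        w.start()
    for w in workers:
        w.join()
    totals = [results.get() for _ in workers]
    print("total simulated operations:", sum(n for _, n in totals))
```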
The workload mixes static objects drawn from a scaled file set with dynamic CGI-like responses, including POST operations and cookie-based advertisement rotation; a later companion benchmark, SPECweb99_SSL, added encrypted connections. The headline metric is the number of simultaneous conforming connections: each connection must sustain a minimum bit rate for the full run, so the score rewards sustained throughput under a quality-of-service constraint rather than peak transaction rate. Response-time percentiles (e.g., the 95th percentile) were also reported, and service providers and content delivery networks used such results for capacity planning.
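The conformance-style scoring idea can be shown with a short sketch: count the connections whose sustained bit rate stays above a threshold, then report a response-time percentile over the conforming set. The 320 kbit/s floor is the commonly cited SPECweb99 minimum but is treated here as an assumption, and the data structures are hypothetical.

```python
# Hedged sketch of conformance-style scoring; the 320 kbit/s QoS
# floor is an assumed value, and Connection is a hypothetical record.
from dataclasses import dataclass

MIN_BITS_PER_SEC = 320_000  # assumed per-connection QoS floor

@dataclass
class Connection:
    bytes_transferred: int
    duration_s: float
    response_times_s: list

def conforming(conns):
    """Return only the connections that met the bit-rate requirement."""
    return [c for c in conns
            if c.duration_s > 0
            and (c.bytes_transferred * 8) / c.duration_s >= MIN_BITS_PER_SEC]

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[rank]

if __name__ == "__main__":
    conns = [Connection(12_000_000, 300.0, [0.05, 0.08, 0.11]),   # 320 kbit/s
             Connection(9_000_000, 300.0, [0.30, 0.45, 0.52])]    # 240 kbit/s
    ok = conforming(conns)
    times = [t for c in ok for t in c.response_times_s]
    print("simultaneous conforming connections:", len(ok))
    if times:
        print("95th percentile response time: %.3f s" % percentile(times, 95))
```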
Implementations ran on operating systems including Windows, Linux, Solaris, HP-UX, AIX, and FreeBSD, with results submitted by vendors including Sun Microsystems, IBM, Hewlett-Packard, Dell, Fujitsu, and NEC Corporation. Enterprise customers used published results to inform procurement and architecture choices, and academic groups used SPECweb99 in comparative systems studies published at venues such as USENIX conferences and ACM SOSP.
Published results compared hardware built on processors from Intel Corporation and AMD with systems from IBM, Sun Microsystems, Hewlett-Packard, Fujitsu, and NEC Corporation, and software stacks such as Apache and Microsoft IIS. SPECweb99 was retired in 2005 in favor of its successor, SPECweb2005, but its workload model and quality-of-service-based scoring influenced later web benchmarks and capacity-planning practice. Its legacy persists in academic citations, vendor whitepapers, and the evolution of web performance engineering documented at venues such as ACM SIGMETRICS and IEEE Transactions on Computers.
Category:Benchmarks