| Kattis | |
|---|---|
| Name | Kattis |
| Developer | Open Kattis |
| Released | 2012 |
| Programming language | Java, Python, C++ |
| Operating system | Cross-platform |
| License | Proprietary (platform); varied for problems |
Kattis
Kattis is an online automated judge and problem-hosting platform used for programming competitions, competitive programming practice, and academic assignments. The service provides an interface for submitting solutions in multiple languages and returns immediate verdicts based on automated judging infrastructure. It is widely adopted by universities, regional contests, and international events for running timed contests and maintaining a searchable problem archive.
Kattis offers a web-based system that accepts source code in languages such as C++, Java, Python, Rust, and Go. The platform integrates language toolchains including GCC, Clang, and the OpenJDK runtime to compile and execute contestant submissions. Kattis supports input/output-driven tasks modeled after problems from events like the International Olympiad in Informatics and regional ICPC competitions that feed into the ICPC World Finals. Its interface includes per-problem statement pages, sample tests, time and memory limits, and automated grading similar to systems used by Topcoder, Codeforces, and AtCoder.
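Problems on judges of this kind are input/output driven: a submission reads the entire test case from standard input and writes its answer to standard output, which the judge then compares against the expected output. The following is a minimal sketch of that submission shape for a hypothetical task (sum two integers per line); the problem itself is invented for illustration.

```python
import sys

def solve(data: str) -> str:
    # Hypothetical task: for each input line "a b", output a + b.
    out = []
    for line in data.splitlines():
        parts = line.split()
        if len(parts) == 2:
            a, b = map(int, parts)
            out.append(str(a + b))
    return "\n".join(out) + "\n"

if __name__ == "__main__":
    # A judge of this kind pipes the test file to stdin and compares
    # stdout against the expected answer file within the time limit.
    sys.stdout.write(solve(sys.stdin.read()))
```

Keeping the logic in a pure function and limiting I/O to one read and one write is a common pattern, since per-line flushing can be slow enough to matter under tight time limits.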
The platform originated from academic tools developed at institutions that host programming contests, influenced by legacy systems such as the Sphere Online Judge and internal grading tools at universities like KTH Royal Institute of Technology and the University of Waterloo. Over time, it evolved to support modern contest formats and was adopted by organizing committees for events including the North American Invitational Programming Contest and national olympiads. Its growth paralleled a surge in the popularity of competitive programming marked by the rise of platforms like SPOJ and CodeChef, and shaped by communities and organizations such as the ICPC (International Collegiate Programming Contest).
Kattis provides a problem submission pipeline that enforces compilation, sandboxed execution, and verdict reporting (Accepted, Wrong Answer, Time Limit Exceeded, Runtime Error, Compilation Error). The judge harnesses containerization and sandboxing technologies inspired by projects such as Docker and LXC, and by secure execution practices common in large-scale online systems. Features include support for multiple test files, special judge programs of the kind used in contests like the Google Code Jam, and interactive problem support modeled on interactive problems from the ICPC and the IOI. Administrative tools allow contest organizers from universities, ACM chapters, and corporate partners to create timed contests, manage teams, and set custom scoring similar to systems used by HackerRank and LeetCode.
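The verdict flow described above can be illustrated with a simplified sketch. This is not Kattis's actual implementation: a real judge adds sandboxing (containers, seccomp filters), memory limits, multiple test files, and special-judge validators, whereas this sketch runs a single test with only a time limit and an exit-code check.

```python
import subprocess

def judge(run_cmd, test_input: str, expected: str, time_limit: float) -> str:
    """Run one test case and map the outcome to a verdict string.

    run_cmd: command list to execute the (already compiled) submission.
    A real judge would run this inside a sandbox, not directly.
    """
    try:
        proc = subprocess.run(run_cmd, input=test_input,
                              capture_output=True, text=True,
                              timeout=time_limit)
    except subprocess.TimeoutExpired:
        return "Time Limit Exceeded"
    if proc.returncode != 0:
        return "Runtime Error"
    # Token-wise comparison tolerates trailing-whitespace differences;
    # judges may instead compare byte-for-byte or call a special judge.
    if proc.stdout.split() == expected.split():
        return "Accepted"
    return "Wrong Answer"
```

A Compilation Error verdict would be produced by an analogous step before this one, when compiling the submission fails.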
Event organizers for regional competitions, national olympiads, and university contests use the platform to host onsite and online contests. It has been integrated into qualifying rounds for European contest circuits and used by teams preparing for the ICPC World Finals. Contest management features include scoreboard freezing, penalty calculation, tie-breaking rules familiar from ICPC scoring, and support for onsite contest proctoring. The platform's reliability and immediate feedback make it suitable for rapid round formats found in events such as Red Bull hackathons and corporate hiring contests organized by companies like Amazon and Microsoft.
Professors and course staff employ the platform for grading programming assignments in undergraduate and graduate courses at institutions such as Massachusetts Institute of Technology, Stanford University, Princeton University, and technical universities across Europe. Integration into curricula supports programming labs, algorithmic thinking exercises derived from CLRS-style algorithms textbooks, and practical assignments in data structures courses influenced by authors like Donald Knuth and Robert Sedgewick. The archive enables instructors to select vetted problems covering topics such as graph algorithms, dynamic programming, computational geometry, and number theory used in courses affiliated with research groups and departments in leading universities.
The problem archive contains tasks tagged by topic, difficulty, and contest origin. Problems are often drawn from national olympiads, university contests, and public contests hosted by ICPC regional organizers and community-driven events on platforms such as Codeforces. Categorization includes tags for techniques such as Dijkstra's algorithm, dynamic programming, minimum spanning trees, and maximum flow; problems also reference classical sources and named problems from competitions like the USACO and the IOI. Many problems include metadata specifying time limits, memory limits, and accepted languages, with some statements originating from problem setters associated with institutions such as ETH Zurich and the University of Cambridge.
The system computes per-contest leaderboards using scoring rules configured by organizers: ICPC-style penalty scoring, score-based point systems, and partial scoring for partially correct solutions similar to models used by Google Code Jam and TopCoder Open. It maintains user statistics that aggregate solved problem counts, submission histories, and performance metrics comparable to public profiles on Codeforces and AtCoder. Organizers and researchers can export anonymized logs for analytics and research, enabling studies in competitive programming performance akin to work published by computer science education researchers at institutions like Carnegie Mellon University and Utrecht University.
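ICPC-style penalty scoring, mentioned above, is commonly computed as follows (the exact parameters, such as the 20-minute penalty, are conventions that organizers can configure per contest): teams are ranked first by problems solved, then by total penalty time, where each solved problem contributes its acceptance time plus a fixed penalty per rejected attempt before the accept. A minimal sketch under those assumptions:

```python
def icpc_score(submissions, wrong_penalty: int = 20):
    """Compute (solved_count, total_penalty) for one team.

    submissions: chronological list of (problem, minute, accepted) tuples.
    Convention assumed here: 20 minutes per rejected attempt on a problem
    the team eventually solves; unsolved problems add no penalty.
    """
    wrong = {}        # problem -> rejected attempts before first accept
    solved_time = {}  # problem -> minute of first accepted submission
    for prob, minute, ok in submissions:
        if prob in solved_time:
            continue  # submissions after the first accept are ignored
        if ok:
            solved_time[prob] = minute
        else:
            wrong[prob] = wrong.get(prob, 0) + 1
    solved = len(solved_time)
    penalty = sum(t + wrong_penalty * wrong.get(p, 0)
                  for p, t in solved_time.items())
    return solved, penalty
```

For example, a team that solves problem A at minute 45 after one rejected attempt, and problem B at minute 100 on the first try, scores 2 solved with 165 penalty minutes (45 + 20 + 100).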
Category:Online judges Category:Competitive programming platforms