| Computer architecture | |
|---|---|
| Name | Computer architecture |
| Field | Computer science, Electrical engineering |
| Inventors | John von Neumann, John Presper Eckert, John William Mauchly |
| Related concepts | Central processing unit, Memory hierarchy, Instruction set, Microprocessor |
Computer architecture defines the design, structure, and operational principles of a computer system, encompassing both the hardware components and the software that drives them. The field bridges the gap between the abstract requirements of computational problems and the physical realities of electronic circuits. Its evolution is deeply intertwined with advances in semiconductor technology and theoretical models of computation.
The conceptual foundations were laid by Charles Babbage with his designs for the Analytical Engine in the 19th century. The modern paradigm was crystallized in the 1940s by the von Neumann architecture, described in the 1945 First Draft of a Report on the EDVAC, which drew on work at the University of Pennsylvania's Moore School of Electrical Engineering. This model established the stored-program concept, unifying data and instructions in a single memory unit. Subsequent decades saw the rise of commercial mainframes from IBM, notably the IBM System/360, which introduced the concept of a compatible family of computers. The invention of the integrated circuit and the microprocessor, pioneered by companies like Intel with the Intel 4004, revolutionized the field, enabling the personal computer revolution led by machines such as the Apple II and the IBM Personal Computer.
The central component is the central processing unit (CPU), which executes instructions fetched from main memory. The memory hierarchy, including caches, RAM, and storage devices such as hard disk drives, manages the trade-off between data access speed and capacity. Input/output subsystems facilitate communication with peripherals such as keyboards, mice, and displays. The system bus, comprising the address bus, data bus, and control bus, serves as the communication backbone between these components. Other critical elements include the motherboard, the power supply unit, and various expansion cards for graphics or networking.
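The stored-program cycle described above can be sketched with a deliberately tiny toy machine. This is a hypothetical three-instruction design for illustration only, not a real ISA; the key point is that the program and its data occupy the same memory, and the CPU repeatedly fetches, decodes, and executes.

```python
# Minimal sketch of the fetch-decode-execute cycle on a hypothetical
# toy machine (LOAD/ADD/HALT only -- not a real instruction set).

def run(memory):
    """Execute a program stored in the same memory as its data."""
    pc, acc = 0, 0  # program counter and accumulator register
    while True:
        opcode, operand = memory[pc]        # fetch
        pc += 1                             # advance to next instruction
        if opcode == "LOAD":                # decode + execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "HALT":
            return acc

# Instructions (addresses 0-2) and data (addresses 10-11) share one
# address space, in the von Neumann style:
program = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
           10: 2, 11: 3}
```

Running `run(program)` loads 2, adds 3, and halts with 5 in the accumulator.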
The instruction set architecture (ISA) serves as the abstract interface between software and hardware, defining the set of commands a processor understands. Major types include Complex Instruction Set Computer designs, exemplified by the x86 architecture from Intel and Advanced Micro Devices, and Reduced Instruction Set Computer designs, such as the ARM architecture, MIPS architecture, and RISC-V. Key aspects include addressing modes, which define how operands are specified, instruction formats, and the handling of interrupts and exceptions. The Application Binary Interface dictates how compilers generate code for a specific ISA.
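Instruction formats can be made concrete with a short decoding sketch. The field positions below follow the RISC-V base specification's R-type format, and the example word is the standard encoding of `add x3, x1, x2`:

```python
# Sketch of decoding a RISC-V R-type instruction word into its fields.
# Bit positions follow the RISC-V base integer spec (R-type format).

def decode_r_type(word):
    return {
        "opcode": word & 0x7F,           # bits 6:0  - operation class
        "rd":     (word >> 7)  & 0x1F,   # bits 11:7 - destination register
        "funct3": (word >> 12) & 0x7,    # bits 14:12
        "rs1":    (word >> 15) & 0x1F,   # bits 19:15 - first source
        "rs2":    (word >> 20) & 0x1F,   # bits 24:20 - second source
        "funct7": (word >> 25) & 0x7F,   # bits 31:25
    }

fields = decode_r_type(0x002081B3)  # encoding of: add x3, x1, x2
```

Decoding yields `rd = 3`, `rs1 = 1`, `rs2 = 2`, and the R-type opcode `0x33`.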
Microarchitecture, also known as computer organization, details the implementation of a given instruction set architecture. It involves the design of datapaths, control units, and execution units such as the arithmetic logic unit. Techniques such as pipelining, brought to mainstream x86 processors with the Intel 80486, and superscalar execution, used in the Intel Pentium, allow multiple instructions to be processed simultaneously. Out-of-order execution and speculative execution, as seen in designs from IBM and Intel, further enhance performance by dynamically reordering operations based on data availability.
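The benefit of pipelining can be shown with a toy timing model. This assumes an idealized classic five-stage pipeline (fetch, decode, execute, memory, write-back) with no stalls or hazards, which real processors must handle:

```python
# Toy timing model contrasting sequential vs. pipelined execution.
# Assumes an idealized 5-stage pipeline with no stalls or hazards.

STAGES = 5  # IF, ID, EX, MEM, WB

def sequential_cycles(n_instructions):
    # Without pipelining, each instruction occupies all stages in turn.
    return n_instructions * STAGES

def pipelined_cycles(n_instructions):
    # The first instruction fills the pipeline; afterwards one
    # instruction completes every cycle.
    return STAGES + (n_instructions - 1)
```

For 100 instructions the model gives 500 cycles sequentially but only 104 pipelined, approaching the ideal throughput of one instruction per cycle.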
Performance is measured by metrics such as instructions per second, FLOPS, and benchmark suites like SPECint. A fundamental principle is Amdahl's law, which quantifies how the serial fraction of a workload limits the potential speedup from parallelization. Optimization focuses on reducing critical path delays, minimizing memory latency through sophisticated cache strategies, and improving branch prediction accuracy. Power consumption has become a paramount concern, driving research into dynamic voltage and frequency scaling and into designs for mobile devices and data centers.
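Amdahl's law has a simple closed form: if a fraction p of a workload can be parallelized across n processors, the overall speedup is 1 / ((1 - p) + p/n), so the serial fraction (1 - p) caps the achievable gain no matter how many processors are added.

```python
# Amdahl's law: speedup from parallelizing fraction p of a workload
# across n processors. The serial fraction (1 - p) bounds the gain.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)
```

For example, with 90% of the work parallelizable, ten processors yield only about a 5.3x speedup, and even infinitely many processors could never exceed 10x.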
Parallel architectures employ multiple processing elements to perform concurrent computations. Multiprocessing systems, like those from Sun Microsystems and SGI, use several discrete CPUs. The mainstream shift came with multicore processors, such as the Intel Core and AMD Ryzen families, which integrate multiple cores on a single die. Large-scale systems include massively parallel supercomputers like Fugaku and Frontier, as well as graphics processing units from NVIDIA and AMD, increasingly used for general-purpose computing on GPUs (GPGPU).
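The data decomposition that parallel architectures exploit can be sketched by splitting a workload across worker threads. This is an illustration of the decomposition pattern only; CPython's global interpreter lock limits thread speedup for pure-Python arithmetic, so real workloads rely on processes, native libraries, or GPUs.

```python
# Sketch of data-parallel decomposition: split a workload into chunks
# and hand each chunk to a separate worker, then combine the results.
# (Illustrative only -- CPython threads do not speed up pure-Python
# arithmetic because of the GIL.)
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    if not data:
        return 0
    chunk = (len(data) + workers - 1) // workers   # ceil division
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, pieces))          # map chunks, reduce
```

The same map-then-reduce structure underlies multicore libraries, MPI programs on supercomputers, and GPU kernels.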
Specialized architectures are tailored for specific computational domains. Vector processors, like the historical Cray-1, excel at scientific computing. Field-programmable gate arrays from Xilinx and Intel offer reconfigurable hardware logic. Application-specific integrated circuits provide maximum efficiency for fixed tasks, such as Bitcoin mining. Neuromorphic engineering, pursued by projects like IBM's TrueNorth and Intel's Loihi, mimics the structure of biological neural networks. Quantum computing architecture, explored by IBM, Google, and D-Wave Systems, utilizes qubits and principles like superposition and entanglement.