LLMpedia
The first transparent, open encyclopedia generated by LLMs

ResNet

Generated by GPT-5-mini
Note: This article was automatically generated by a large language model (LLM) from purely parametric knowledge (no retrieval). It may contain inaccuracies or hallucinations. This encyclopedia is part of a research project currently under review.
Article Genealogy
Expansion funnel: 78 links extracted → 19 after deduplication → 17 after named-entity filtering (2 rejected as non-named entities) → 9 enqueued (2 rejected for similarity to existing articles)
ResNet
LunarLullaby · CC BY-SA 4.0
Name: ResNet
Introduced: 2015
Developers: Microsoft Research
Key paper: "Deep Residual Learning for Image Recognition"
Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Main use: Image recognition, feature extraction
Architecture: Deep residual convolutional networks
Notable results: First place in the ILSVRC 2015 classification task

ResNet is a family of deep convolutional neural networks introduced in 2015 that enabled much deeper architectures by using residual connections to ease optimization. It transformed practice across computer vision and influenced work in natural language processing, robotics, and medical imaging by allowing architectures with hundreds of layers to converge reliably. The original formulation and successors have been widely cited, incorporated into frameworks, and adapted by both academic groups and industry labs.

Overview

ResNet arose from research at Microsoft Research and was presented at the 2016 IEEE Conference on Computer Vision and Pattern Recognition, where it demonstrated state-of-the-art performance on ImageNet Large Scale Visual Recognition Challenge tasks. The method addresses the optimization difficulties, including vanishing gradients, observed as networks deepen, building on earlier architectures such as AlexNet (from the University of Toronto), VGG, and Inception (from Google). It quickly influenced implementations in libraries from Facebook AI Research, Google Research, and OpenAI and was deployed in products from Microsoft, Amazon Web Services, and Apple.
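The effect of identity shortcuts on gradient flow can be illustrated with a toy scalar model (an illustrative sketch only; real networks backpropagate through tensors and convolutions): without shortcuts the end-to-end gradient is a product of per-layer derivatives and can shrink geometrically with depth, while a shortcut adds 1 to each factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-layer derivatives, all below 1, as happens with
# saturating activations or small weights.
depth = 50
layer_grads = rng.uniform(0.1, 0.9, size=depth)

# Plain chain rule: the end-to-end gradient is the product of the
# factors and collapses toward zero as depth grows.
plain_grad = np.prod(layer_grads)

# With an identity shortcut, each block's local derivative is f'(x) + 1,
# so every factor is at least 1 and the gradient signal survives.
residual_grad = np.prod(layer_grads + 1.0)

print(f"plain: {plain_grad:.3e}, residual: {residual_grad:.3e}")
```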

Architecture

ResNet architectures stack convolutional blocks that include identity-based skip connections, a design related to highway networks from Jürgen Schmidhuber's lab at IDSIA and earlier residual-like ideas. Typical models are named by depth, such as ResNet-18, -34, -50, -101, and -152, building upon techniques from Geoffrey Hinton's group and others at the University of Toronto. The basic residual block for shallower networks uses two 3×3 convolutions with an identity shortcut; the bottleneck block for deeper networks uses a 1×1, 3×3, 1×1 convolution sequence, as in He et al. Architectures were implemented in frameworks such as Caffe, TensorFlow, and PyTorch, with pretrained weights circulated through Model Zoo efforts and repositories maintained by researchers at Berkeley AI Research.
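The basic block described above can be sketched in NumPy. This is a structural sketch rather than a faithful implementation: dense matrices stand in for the 3×3 convolutions, and batch normalization is omitted, but the shape of the computation, y = relu(F(x) + x), is the point.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def basic_block(x, w1, w2):
    """Basic residual block: two weight layers plus an identity shortcut.

    Dense matrices stand in for the 3x3 convolutions of He et al.;
    batch normalization is omitted for brevity.
    """
    out = relu(x @ w1)    # first weight layer + ReLU
    out = out @ w2        # second weight layer (no activation yet)
    return relu(out + x)  # add the identity shortcut, then final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w1 = rng.standard_normal((64, 64)) * 0.01
w2 = rng.standard_normal((64, 64)) * 0.01
y = basic_block(x, w1, w2)

# With zero weights the residual branch F(x) vanishes and the block
# reduces to relu(x): the identity path carries the signal alone.
y_id = basic_block(x, np.zeros((64, 64)), np.zeros((64, 64)))
```

This degenerate case is why residual blocks are easy to optimize: learning the identity mapping only requires driving the residual branch toward zero.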

Training and Optimization

Training ResNet models employed optimization and regularization techniques advanced by communities around Yann LeCun, Andrew Ng, and Yoshua Bengio. Key practices include stochastic gradient descent with momentum, weight decay, batch normalization popularized by a team at Google Brain, and initialization schemes developed by Xavier Glorot and Kaiming He. Large-scale training used datasets such as ImageNet and COCO, with augmentation strategies similar to those used by teams at Microsoft and Facebook AI Research. Distributed training on clusters of NVIDIA GPUs and cloud platforms such as Google Cloud Platform and Amazon Web Services enabled training of deep variants via libraries such as Horovod and implementations in MXNet.
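Two of the practices above can be sketched in a few lines of NumPy: He (Kaiming) initialization, which scales weight variance by 2/fan_in to suit ReLU networks, and an SGD step with momentum and L2 weight decay. The hyperparameter defaults below (learning rate 0.1, momentum 0.9, weight decay 1e-4) follow the original ResNet paper; the code itself is a minimal sketch, not a training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # Kaiming/He initialization: variance 2 / fan_in, suited to ReLU nets.
    return rng.standard_normal((fan_in, fan_out)) * np.sqrt(2.0 / fan_in)

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9,
                      weight_decay=1e-4):
    """One SGD update in the style used to train the original ResNets:
    momentum 0.9, with L2 weight decay folded into the gradient."""
    grad = grad + weight_decay * w
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = he_init(256, 256)
v = np.zeros_like(w)
g = np.full_like(w, 0.01)  # stand-in gradient for one illustrative step
w, v = sgd_momentum_step(w, g, v)
```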

Variants and Extensions

Numerous variants extended ResNet concepts: Wide Residual Networks widened channels, ResNeXt, developed at Facebook AI Research, introduced grouped convolutions, and DenseNet, from researchers at Cornell University and collaborators, proposed dense connectivity. Other extensions include the pre-activation ResNet modification by the original authors, Squeeze-and-Excitation Networks from Momenta and the University of Oxford, and hybrid architectures combining ResNet backbones with attention modules from Google Research and DeepMind. ResNet principles also informed language models developed at OpenAI, vision transformers from Google Research, and segmentation networks from ETH Zurich groups.
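The Squeeze-and-Excitation idea mentioned above can be sketched on a channels-first NumPy feature map: a "squeeze" (global average pool) summarizes each channel, and an "excitation" bottleneck MLP with a sigmoid produces per-channel scales. The reduction ratio here is 4 for brevity (the paper defaults to 16); shapes and weights are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_module(feature_map, w_down, w_up):
    """Squeeze-and-Excitation recalibration on a (channels, H, W) map.

    Squeeze: global average pool to one scalar per channel.
    Excite:  bottleneck MLP + sigmoid yields per-channel scales in (0, 1).
    """
    squeezed = feature_map.mean(axis=(1, 2))                   # (C,)
    scales = sigmoid(np.maximum(squeezed @ w_down, 0) @ w_up)  # (C,)
    return feature_map * scales[:, None, None]  # rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 7, 7))
w_down = rng.standard_normal((32, 8)) * 0.1  # reduction ratio 4 here
w_up = rng.standard_normal((8, 32)) * 0.1
y = se_module(x, w_down, w_up)
```

Because the scales lie strictly between 0 and 1, the module can only attenuate channels, which is the recalibration behavior the design intends.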

Applications

ResNet backbones are used across applications developed at institutions such as Stanford University, the Massachusetts Institute of Technology, and Johns Hopkins University for tasks such as object detection with Faster R-CNN, instance segmentation with Mask R-CNN, medical image analysis by teams at the Mayo Clinic and Harvard Medical School, face recognition systems built by companies like Face++, and robotics perception stacks at Boston Dynamics. ResNet variants power features in products and services from Microsoft Azure, Apple, and Google Photos, as well as research prototypes from labs at MIT CSAIL and the CMU Robotics Institute.

Performance and Benchmarking

On benchmarks such as the ImageNet Large Scale Visual Recognition Challenge and COCO, ResNet models set new baselines upon release, outperforming predecessors such as VGG and GoogLeNet. Performance comparisons were evaluated by teams on the Stanford DAWNBench project and in leaderboards maintained by Papers with Code and competitions hosted by Kaggle. Hardware-specific optimizations leveraged accelerators from NVIDIA (GPUs), Google (TPUs), and specialized ASICs from Intel and ARM, with throughput and latency trade-offs analyzed by industry groups at Dell EMC and IBM Research.

Limitations and Criticisms

Critiques of ResNet-guided designs include computational and energy costs highlighted by researchers at University of California, Berkeley and environmental analyses from groups at University of Massachusetts Amherst. Concerns over brittleness to adversarial examples were raised by teams at Cornell and NYU, and interpretability issues were discussed in workshops organized by NeurIPS and ICLR. Scaling to extremely deep variants led to diminishing returns explored by researchers at Microsoft Research and Google Brain, prompting alternative compact architectures from MobileNet authors at Google and efficiency-focused work at DeepMind.

Category:Deep learning architectures