
Visual Genome

Visual Genome
Name: Visual Genome
Released: 2016
Creators: Stanford University
Domains: Computer vision; Natural language processing
License: Various (research use)


Visual Genome is a large-scale dataset pairing images with dense region descriptions, object annotations, attribute labels, relationship triples, and question–answer pairs. It was developed to bridge computer vision and natural language processing by enabling joint reasoning about objects, attributes, and relations in images. The project was led by researchers at Stanford University, and the dataset has been widely used in studies comparing models across tasks such as object detection, image captioning, scene graph generation, and visual question answering.

Overview

Visual Genome was introduced to support multimodal research at the intersection of computer vision and natural language processing. It complements datasets such as ImageNet, COCO (dataset), and Pascal VOC by emphasizing dense annotations and explicit linguistic structure rather than image-level labels alone, and it has served as a basis for benchmarks and workshop challenges at conferences such as CVPR, ICCV, and NeurIPS.

Dataset Composition

The dataset contains images drawn from publicly available collections, largely Flickr (YFCC100M) and MS-COCO, annotated with objects, attributes, and relationships. Each image carries multiple region descriptions, short caption-like phrases grounded to specific image regions, collected from crowdworkers on Amazon Mechanical Turk. Objects are localized with bounding boxes, and attribute labels and relationship triples are attached to these localized objects. Visual Genome also provides question–answer pairs comparable to those in datasets such as VQA (dataset) and CLEVR, supporting reasoning experiments.
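
As a rough sketch of how these per-image annotations are often inspected, the snippet below loads two of the released JSON files; the file names (region_descriptions.json, question_answers.json) and field names ("regions", "qas", "question", "answer") are assumptions based on common distributions of the dataset and may differ between versions.

    import json

    # Loading sketch: file and field names below are assumptions about the
    # released JSON layout and may differ between Visual Genome versions.
    with open("region_descriptions.json") as f:
        regions = json.load(f)   # one record per image
    with open("question_answers.json") as f:
        qas = json.load(f)       # one record per image

    # Average number of dense region descriptions per image.
    counts = [len(rec["regions"]) for rec in regions]
    print("mean regions per image:", sum(counts) / len(counts))

    # Print one question-answer pair from the first image that has any.
    for rec in qas:
        if rec["qas"]:
            qa = rec["qas"][0]
            print(qa["question"], "->", qa["answer"])
            break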

Annotation Schema and Methodology

Annotations follow a schema combining object labels, attribute tags, and relationship triples (subject–predicate–object); objects and predicates are canonicalized to WordNet synsets for lexical grounding, and the triples over an image together form a scene graph. Annotation relied on crowd-sourced judgments collected in multiple stages, with normalization and verification passes used to merge synonymous labels and filter low-quality responses.
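
A minimal sketch of this schema, using illustrative class and field names rather than the official release format, could look like the following:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # Illustrative schema sketch: class and field names are assumptions,
    # not the official Visual Genome release format.

    @dataclass
    class VGObject:
        name: str
        bbox: Tuple[int, int, int, int]            # x, y, width, height
        attributes: List[str] = field(default_factory=list)
        synset: str = ""                           # WordNet grounding, e.g. "dog.n.01"

    @dataclass
    class VGRelationship:
        subject: VGObject
        predicate: str
        obj: VGObject

    dog = VGObject("dog", (30, 40, 120, 90), ["brown", "small"], "dog.n.01")
    frisbee = VGObject("frisbee", (160, 35, 40, 40), ["red"], "frisbee.n.01")
    rel = VGRelationship(dog, "catching", frisbee)
    print(f"{rel.subject.name} --{rel.predicate}--> {rel.obj.name}")   # dog --catching--> frisbee

Collecting all such triples for a single image yields the scene graph representation referred to above.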

Applications and Benchmarks

Visual Genome has been used to train and evaluate models for object recognition, dense captioning, scene graph generation, visual question answering, and vision–language pretraining. Object detectors such as Faster R-CNN trained on Visual Genome annotations are widely used to extract region features for BERT-style vision–language models, and benchmarks built on the dataset have shaped evaluation protocols for scene graph generation presented at venues such as CVPR, ECCV, and AAAI.
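
One widely used evaluation protocol on the dataset is triplet Recall@K for scene graph prediction. The sketch below is a simplified illustration that matches triples by label only, omitting the bounding-box IoU matching used in full evaluations:

    from typing import List, Tuple

    Triple = Tuple[str, str, str]   # (subject, predicate, object)

    def recall_at_k(predicted: List[Triple], ground_truth: List[Triple], k: int) -> float:
        """Fraction of ground-truth triples found among the top-k predictions.

        Simplified: real scene-graph evaluations also match subject/object
        bounding boxes by IoU, which is omitted here.
        """
        if not ground_truth:
            return 0.0
        top_k = set(predicted[:k])
        hits = sum(1 for t in ground_truth if t in top_k)
        return hits / len(ground_truth)

    gt = [("man", "riding", "horse"), ("horse", "on", "grass")]
    pred = [("man", "riding", "horse"), ("man", "wearing", "hat"), ("horse", "on", "grass")]
    print(recall_at_k(pred, gt, k=2))   # 0.5: one of two ground-truth triples is in the top 2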

Limitations and Criticisms

Critiques of the dataset focus on annotation inconsistency, heavy label imbalance (a long tail of rare objects and predicates dominated by a few generic ones), and cultural bias tied to the distribution of source images from platforms such as Flickr. Researchers have also raised concerns about reproducibility and the need for clearer licensing, since the underlying images carry a mix of Creative Commons licenses. Methodological limitations include noisy relation labels and varying annotation granularity compared with more tightly curated resources such as COCO (dataset) and controlled synthetic datasets such as CLEVR.
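
The label-imbalance criticism can be illustrated with a toy predicate distribution; the counts below are invented for illustration and are not measured from the dataset:

    from collections import Counter

    # Toy illustration of predicate imbalance; these counts are made up
    # for illustration and are not measured from Visual Genome.
    predicate_counts = Counter({
        "on": 50000, "has": 30000, "in": 20000, "wearing": 8000,
        "behind": 3000, "riding": 1500, "catching": 120, "juggling": 5,
    })

    total = sum(predicate_counts.values())
    top3 = sum(count for _, count in predicate_counts.most_common(3))
    print(f"top 3 predicates cover {top3 / total:.1%} of all relationship labels")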

Related Resources

Comparable and complementary resources include ImageNet, COCO (dataset), Open Images (dataset), PASCAL VOC, ADE20K, SUN (database), Visual Question Answering (VQA), Flickr30k, RefCOCO, Visual7W, Conceptual Captions, SBU Captioned Photo Dataset, WikiArt Dataset, Kinetics (dataset), AVA (dataset), Cityscapes dataset, KITTI (dataset), and synthetic benchmarks such as CLEVR and ShapeNet (see also List of datasets for machine learning research). Research tools and annotations interoperable with Visual Genome concepts include LabelMe, COCO-Stuff, Detectron2 from Facebook AI Research, and evaluation suites from groups such as NIST and the Stanford NLP Group.

Category:Computer vision datasets