| AMIGOS (dataset) | |
|---|---|
| Name | AMIGOS dataset |
| Type | Multimodal affective dataset |
| Modalities | EEG, video, audio, peripheral physiological signals (ECG, GSR) |
| Creators | Researchers at Queen Mary University of London and the University of Trento |
| Released | 2017 |
| License | Free for academic research use |
AMIGOS (A dataset for Affect, Personality and Mood research on Individuals and GrOupS) is a multimodal affective dataset created for research on emotion recognition and social signal processing. It supports studies in affective computing, human-computer interaction, and neuroscience by providing synchronized EEG, video, audio, and peripheral physiological recordings of participants watching emotional videos, both individually and in small groups, and it is widely used for benchmarking emotion-recognition methods across modalities.
AMIGOS provides synchronized recordings of behavioral and physiological responses from participants exposed to affective video stimuli. It was designed to extend earlier affect corpora such as DEAP and MAHNOB-HCI in two directions: participants were recorded both alone and in groups, and the annotations cover not only emotional state but also personality traits and mood.
The dataset contains four classes of time-synchronized data: electroencephalography (EEG), frontal video with audio, full-body and depth video, and peripheral physiological signals, namely electrocardiogram (ECG) and galvanic skin response (GSR). Forty participants took part. The stimuli are short excerpts from emotional films, complemented by longer videos in the group experiment, and each recording session is organized per participant and per trial.
Data were collected in a controlled laboratory setting under an ethics-approved protocol, and all participants gave informed consent. EEG was recorded with a wireless consumer-grade headset (Emotiv EPOC), and the peripheral signals with wearable sensors; frontal video and audio were captured with an HD camera and full-body movement with an RGB-depth camera. Stimulus presentation and acquisition software time-stamped all streams so that the modalities can be aligned.
Annotations comprise both internal and external labels. After each video, participants self-reported valence, arousal, dominance, liking, and familiarity, and selected applicable basic emotion categories; external annotators additionally rated valence and arousal from the frontal videos. The dataset also includes personality traits (Big Five questionnaire), mood (PANAS), and demographic and session metadata.
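In benchmark studies on AMIGOS and similar corpora, the continuous self-assessment ratings are commonly binarized into high/low classes before classification. A minimal sketch of this convention (the threshold of 5 on a 1–9 scale is a common choice in the literature, not part of the dataset specification, and the ratings below are illustrative):

```python
import numpy as np

def binarize_ratings(ratings, threshold=5.0):
    """Map continuous 1-9 self-assessment ratings to binary high/low labels.

    Ratings strictly above the threshold become class 1 (high);
    all others become class 0 (low).
    """
    return (np.asarray(ratings, dtype=float) > threshold).astype(int)

valence = [3.2, 7.5, 5.0, 8.1]      # illustrative ratings, not real data
labels = binarize_ratings(valence)  # -> array([0, 1, 0, 1])
```

Binary high/low valence and arousal labels of this kind are what most published classification baselines on such datasets report accuracy against.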
Technical validation follows standard quality-assurance practice for physiological recordings: band-pass filtering, artifact rejection, and checks of cross-modal synchronization. The accompanying publication reports baseline single-trial classification results for valence and arousal from the EEG and peripheral signals, enabling direct comparison with related benchmarks such as DEAP.
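The band-pass filtering step can be sketched as follows. This is an illustrative, dependency-free FFT-domain filter, not the authors' preprocessing code; the 128 Hz sampling rate and the 4–45 Hz band are assumptions for the example:

```python
import numpy as np

def bandpass_eeg(signal, fs, low=4.0, high=45.0):
    """Idealized FFT-domain band-pass filter for a single EEG channel.

    Frequency bins outside [low, high] Hz are zeroed. Real pipelines
    typically use IIR/FIR filters (e.g. a Butterworth via SciPy); this
    brick-wall version keeps the sketch self-contained.
    """
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 128                       # Hz; assumed sampling rate for this sketch
t = np.arange(0, 4, 1.0 / fs)
# Synthetic channel: 10 Hz alpha-band activity plus 60 Hz line noise
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = bandpass_eeg(raw, fs)  # 10 Hz component retained, 60 Hz removed
```

A zero-phase IIR filter (e.g. `scipy.signal.filtfilt` with a Butterworth design) is the more common choice in practice, since brick-wall FFT masking can introduce ringing on real, non-stationary signals.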
AMIGOS has been used in studies on emotion recognition, social computing, and brain-computer interfaces. Typical applications include multimodal fusion, transfer learning, and deep-learning models for affect classification. The dataset supports comparative evaluation with corpora such as IEMOCAP, DEAP, SEMAINE, RECOLA, and MAHNOB-HCI, and results on it have appeared at venues including ACM Multimedia, NeurIPS, and AAAI.
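The simplest form of multimodal fusion used with such data is feature-level (early) fusion: per-trial feature vectors from each modality are concatenated before being passed to a classifier. A minimal sketch, in which the feature names and dimensions are illustrative rather than taken from the dataset:

```python
import numpy as np

def early_fusion(*feature_blocks):
    """Concatenate per-trial feature matrices from several modalities.

    Each block has shape (n_trials, n_features); the result places the
    feature dimensions of all modalities side by side, trial-aligned.
    """
    blocks = [np.asarray(b, dtype=float) for b in feature_blocks]
    if len({b.shape[0] for b in blocks}) != 1:
        raise ValueError("all modalities must cover the same trials")
    return np.concatenate(blocks, axis=1)

rng = np.random.default_rng(0)
eeg_feats = rng.random((8, 14))  # e.g. band power per EEG channel (illustrative)
ecg_feats = rng.random((8, 4))   # e.g. heart-rate features (illustrative)
fused = early_fusion(eeg_feats, ecg_feats)  # shape (8, 18)
```

Decision-level (late) fusion, which trains one classifier per modality and combines their outputs, is the usual alternative when the modalities have very different sampling rates or noise characteristics.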
Limitations commonly noted include the modest sample size, limited demographic diversity, and the laboratory setting, which constrains ecological validity. Because the recordings contain identifiable face video and biometric signals, access is restricted to academic use under an end-user license agreement, and reuse must remain within the scope of the participants' consent. Broader concerns about bias and misuse of affect-recognition technology have been raised in guidance from professional bodies such as the IEEE and the ACM.
Category:Datasets