A massive-scale, egocentric dataset and benchmark suite collected across 74 worldwide locations in 9 countries, with over 3,025 hours of daily-life activity video.



EGO4D Consortium

This initiative is led by an international consortium of 13 universities, in partnership with Facebook AI, collaborating to advance egocentric perception.


What should I cite when referencing this effort?

If you use the dataset or annotations, or draw inspiration from this work, please cite the arXiv paper:

K. Grauman et al. Ego4D: Around the World in 3,000 Hours of Egocentric Video. arXiv preprint arXiv:2110.07058, 2021. BibTeX (.bib)

When can I download the dataset?

We are working towards a public release in the second half of January 2022. The dataset collection and challenges are described in the arXiv paper, and further details will be presented in a live reveal session during the 9th Egocentric Perception, Interaction and Computing (EPIC) workshop alongside ICCV 2021 on Sunday 17 October. A recording will be made available shortly after.

What metadata is available?

For each video, we provide the collecting partner/university, date of recording, recording equipment, and the video parts when the video is made up of smaller chunks. Information about the availability of IMU and audio, and whether the video has been redacted, is also included. Refer to the README for the metadata formats.
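The fields above can be queried programmatically once the metadata is released. Below is a minimal sketch of filtering videos by IMU availability; the field names (`video_uid`, `has_imu`, etc.) are illustrative assumptions, so consult the released README for the actual schema.

```python
# Hypothetical sketch: filtering Ego4D video metadata records.
# Field names are assumptions for illustration; see the README
# for the real metadata schema.

def videos_with_imu(metadata):
    """Return UIDs of videos whose metadata reports IMU availability."""
    return [m["video_uid"] for m in metadata if m.get("has_imu")]

# Toy records mimicking the per-video fields described above.
sample = [
    {"video_uid": "vid_001", "university": "partner_a", "has_imu": True,
     "has_audio": True, "is_redacted": False},
    {"video_uid": "vid_002", "university": "partner_b", "has_imu": False,
     "has_audio": True, "is_redacted": True},
]

print(videos_with_imu(sample))  # only vid_001 reports IMU
```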

Who collected this data?

The data was collected from 855 participants. We showcase the distribution of demographics for the 69% of participants who volunteered to self-identify their age, gender, country of residence, and occupation.

EGO4D Demographics

Does the data contain identifying information of individuals?

The collecting partner holds consent forms and/or release forms for all videos. The data contains faces and other identifying information only where consent was collected from participants. For the majority of videos, the data was de-identified before release. Refer to our privacy statement and the arXiv paper (Sec. 3.4 and Appendix C) for details of our privacy and de-identification pipeline.

What coverage of scenarios do you have?

A sample visualisation of our scenarios is below. The outer circle shows the 14 most common scenarios (70% of the data). The word cloud shows the scenarios in the remaining 30%. The inner circle is colour-coded by contributing partner (see the map markers above).

EGO4D Scenarios

Do you offer pre-extracted features?

Yes. We provide precomputed SlowFast video features for the full dataset.
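Once downloaded, the precomputed features can be indexed by time. The sketch below assumes features arrive as one `(num_windows, dim)` array per video with a fixed temporal stride; both the stride value and the array layout are assumptions, so check the feature README for the actual extraction parameters.

```python
import numpy as np

# Hypothetical sketch: look up the precomputed SlowFast feature window
# nearest to a timestamp. The 1-second stride and (num_windows, dim)
# layout are assumptions for illustration.

def feature_at(features: np.ndarray, t_seconds: float,
               stride_s: float = 1.0) -> np.ndarray:
    """Return the feature row whose window is nearest to t_seconds."""
    idx = int(round(t_seconds / stride_s))
    idx = min(max(idx, 0), len(features) - 1)  # clamp to a valid window
    return features[idx]

# Toy stand-in for one video's feature array: 10 windows of dim 4.
feats = np.arange(10 * 4, dtype=np.float32).reshape(10, 4)
vec = feature_at(feats, t_seconds=3.2)  # nearest window is index 3
print(vec)
```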

What equipment, resolution and frame rate are available?

This depends on the capture equipment. To avoid models overfitting to a single capture device, seven different head-mounted cameras were deployed across the dataset: GoPro, Vuzix Blade, Pupil Labs, ZShades, ORDRO EP6, iVue Rincon 1080, and Weeview. We release all footage at its native resolution, but also offer a standardised 30fps version for ease of use. All benchmark results use the standardised version.
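If you need to standardise your own footage the same way, ffmpeg's `fps` filter resamples a video to a fixed frame rate. The helper below only builds the command line (file names are placeholders); this mirrors the 30fps standardisation described above but is not the project's own pipeline.

```python
# Hypothetical helper that builds an ffmpeg command to produce a
# 30fps copy of a video. ffmpeg's fps filter is real; the file
# names are placeholders.

def standardise_cmd(src: str, dst: str, fps: int = 30) -> list:
    """Build an ffmpeg command resampling src to `fps`, copying audio."""
    return ["ffmpeg", "-i", src, "-filter:v", f"fps={fps}", "-c:a", "copy", dst]

print(standardise_cmd("native.mp4", "standard_30fps.mp4"))
```

Run the returned list with `subprocess.run` on a machine where ffmpeg is installed.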

How can I participate in the benchmarks?

We are working towards a public release in the second half of January 2022. Test servers for the first round of benchmarks will open early next year, with results announced alongside CVPR 2022. Revisit this webpage for further information.

EGO4D Team

Carnegie Mellon University, Pittsburgh, U.S.

  • Kris Kitani (PI)
  • Xingyu Liu
  • Qichen Fu
  • Sean Crane
  • Xuhua Huang
  • Xindi Wu

Carnegie Mellon University Africa, Rwanda

  • Abrham Gebreselasie

King Abdullah University of Science and Technology, KSA

  • Bernard Ghanem (PI)
  • Chen Zhao
  • Mengmeng Xu
  • Merey Ramazanova

University of Minnesota, U.S.

  • Hyun Soo Park (PI)
  • Jayant Sharma
  • Tien Do
  • Zachary Chavis

International Institute of Information Technology, Hyderabad, India

  • C. V. Jawahar (PI)
  • Raghava Modhugu
  • Siddhant Bansal

Indiana University Bloomington, U.S.

  • David Crandall (PI)
  • Yuchen Wang
  • Weslie Khoo

University of Pennsylvania, U.S.

  • Jianbo Shi (PI)

University of Catania, Italy

  • Giovanni Maria Farinella (PI)
  • Antonino Furnari

University of Tokyo, Japan

  • Yoichi Sato (PI)
  • Takuma Yagi
  • Takumi Nishiyasu
  • Yifei Huang
  • Yusuke Sugano
  • Zhenqiang Li

Facebook AI Research, International

  • Kristen Grauman (PI)
  • Jitendra Malik (PI)
  • Dhruv Batra
  • Eugene Byrne
  • Vincent Cartillier
  • Morrie Doulaty
  • Akshay Erapalli
  • Christian Fuegen
  • Rohit Girdhar
  • Jackson Hamburger
  • James Hillis, FRL
  • Vamsi Krishna Ithapu, FRL
  • Hao Jiang
  • Hanbyul Joo
  • Jachym Kolar
  • Satwik Kottur
  • Anurag Kumar, FRL
  • Federico Landini
  • Chao Li, FRL
  • Miguel Martin
  • Tullie Murrell
  • Tushar Nagarajan
  • Christoph Feichtenhofer
  • Karttikeya Mangalam
  • Richard Newcombe, FRL
  • Santhosh Kumar Ramakrishnan
  • Leda Sari, FRL
  • Kiran Somasundaram, FRL
  • Lorenzo Torresani
  • Minh Vo, FRL
  • Andrew Westbury
  • Mingfei Yan, FRL

University of Bristol, UK

  • Dima Damen (PI)
  • Michael Wray
  • Will Price
  • Jonathan Munro
  • Adriano Fragomeni

National University of Singapore, Singapore

  • Mike Zheng Shou (PI)
  • Haizhou Li (Co-PI)
  • Eric Z. Xu
  • Ruijie Tao
  • Yunyi Zhu

Georgia Institute of Technology, U.S.

  • Jim Rehg (PI)
  • Miao Liu
  • Fiona Ryan
  • Audrey Southerland
  • Wenqi Jia

Universidad de los Andes, Colombia

  • Pablo Arbelaez (PI)
  • Cristina Gonzalez
  • Paola Ruiz Puentes

Massachusetts Institute of Technology, U.S.

  • Aude Oliva (PI)
  • Antonio Torralba (PI)


University of California, Berkeley, U.S.

  • Ilija Radosavovic


We are working towards a public release in the second half of Jan 2022.

License forms (for all partners) must be signed to access videos, metadata and annotations. Revisit this webpage for details.


Email us at: info@ego4d-data.org