
Welcome To EGO4D!

Ego4D GoalStep Release

Ego4D GoalStep annotations and grouped videos have been released. Please refer to the v2.1 updates for more details.

Ego4D v2.0 Update Available

The Ego4D v2.0 update is now publicly available.

EGO4D is the world's largest egocentric (first-person) video ML dataset and benchmark suite, with 3,600 hours (and counting) of densely narrated video and a wide range of annotations across five new benchmark tasks. It covers hundreds of scenarios (household, outdoor, workplace, leisure, etc.) of daily-life activity captured in the wild by 926 unique camera wearers from 74 worldwide locations and 9 different countries. Portions of the video are accompanied by audio, 3D meshes of the environment, eye gaze, stereo, and/or synchronized videos from multiple egocentric cameras at the same event. The approach to data collection was designed to uphold rigorous privacy and ethics standards, with consenting participants and robust de-identification procedures where relevant.

Start Here for instructions on how to access the dataset by accepting the terms of our license agreement.

Watch Here for a YouTube introduction to the dataset, tooling, and challenges.

Read the paper here for a more complete introduction.

Read about the benchmarks here for details on the specific tasks and annotations.

Explore the data before downloading here (you will first need to accept the license agreement).

Visit our forum or contact us to ask questions, make suggestions, or discuss Ego4D and related research.
