AVA

AVA is a project that provides audiovisual annotations of video to improve our understanding of human activity. The annotated videos are all 15-minute movie clips. Each clip has been exhaustively labeled by human annotators; movie clips were chosen because they are expected to offer a richer variety of recording conditions and representations of human activity.

We provide the following annotations.

AVA Actions Dataset

The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute movie clips, where actions are localized in space and time, resulting in 1.58M action labels, with multiple labels frequently assigned to the same person. A detailed description of this dataset and our contributions can be found in our accompanying CVPR '18 paper.
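The action annotations are distributed as CSV files. As a minimal Python sketch, assuming the commonly documented v2.1 column order of video id, middle-frame timestamp, normalized box corners, action id, and person id (the README in the download is the authoritative schema), the labels can be grouped per person box like this:

```python
import csv
from collections import defaultdict

def load_actions(csv_path):
    """Collect action labels per (video, timestamp, person) key, since
    a single person box frequently carries several action labels."""
    labels = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            # Assumed 8-column layout; verify against the released files.
            video_id, t, x1, y1, x2, y2, action_id, person_id = row
            box = tuple(float(v) for v in (x1, y1, x2, y2))  # normalized [0, 1]
            labels[(video_id, float(t), int(person_id))].append((box, int(action_id)))
    return labels
```

The load_actions helper and the exact column layout above are illustrative assumptions, not part of the official release.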

AVA v2.1 is now available for download. It was the basis of a challenge in partnership with the ActivityNet workshop at CVPR 2018.

AVA Spoken Activity Datasets

AVA ActiveSpeaker: associates speaking activity with a visible face in the AVA v1.0 videos, resulting in 3.65 million labeled frames across ~39K face tracks. A detailed description of this dataset is in our arXiv paper. The labels are available for download here.
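As a rough illustration, assuming one CSV row per labeled face frame with a per-frame label string (e.g. SPEAKING_AUDIBLE) and a face-track identifier in the final column, the speaking activity of each track could be summarized as follows; both the column order and the label string are assumptions to check against the released files:

```python
import csv
from collections import defaultdict

def speaking_fraction(csv_path):
    """For each face track, compute the fraction of its frames that are
    labeled as audible speaking."""
    totals = defaultdict(lambda: [0, 0])  # track -> [speaking frames, total frames]
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            video_id, timestamp, x1, y1, x2, y2, label, track_id = row[:8]
            counts = totals[(video_id, track_id)]
            counts[1] += 1
            if label == "SPEAKING_AUDIBLE":  # assumed label string
                counts[0] += 1
    return {track: s / n for track, (s, n) in totals.items()}
```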

AVA Speech densely annotates audio-based speech activity in the AVA v1.0 videos and explicitly labels three background-noise conditions, resulting in ~46K labeled segments spanning 45 hours of data. A detailed description of this dataset is in our Interspeech '18 paper. The labels are available for download here.
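Because these labels are time segments rather than per-frame marks, a quick sanity check, assuming each CSV row holds a video id, segment start and end times in seconds, and a condition label, is to sum the durations per condition:

```python
import csv
from collections import defaultdict

def hours_per_condition(csv_path):
    """Sum segment durations (in hours) per label, e.g. to compare
    against the ~45 hours of labeled data quoted above."""
    hours = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            # Assumed 4-column layout: video_id, start, end, label.
            video_id, start, end, label = row[:4]
            hours[label] += (float(end) - float(start)) / 3600.0
    return dict(hours)
```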

Ready to explore or start using AVA?

For announcements and details on the 2019 challenge, please sign up for the Google Group: ava-dataset-users.
