AVA Dataset

The AVA dataset densely annotates 80 atomic visual actions in 57.6k movie clips, localizing each action in space and time. This yields 210k action labels, frequently with multiple labels per person. The main differences from existing video datasets are: (1) the definition of atomic visual actions, which avoids collecting data for each and every complex action; (2) precise spatio-temporal annotations, possibly with multiple annotations per person; (3) the use of diverse, realistic video material (movies). Our goal is to accelerate research on video action recognition.
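To give a feel for the annotation structure, the following is a minimal Python sketch of how one might group AVA-style labels per person box. The column order assumed here (video_id, timestamp, box coordinates x1, y1, x2, y2, action_id) and the load_annotations helper are illustrative assumptions, not the official loader; check the download page for the exact schema of the release you use.

    import csv
    from collections import defaultdict

    def load_annotations(path):
        """Group action labels per (video, timestamp, box).

        Assumes an AVA-style CSV layout:
        video_id, timestamp, x1, y1, x2, y2, action_id
        with box coordinates normalized to [0, 1].
        """
        labels = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.reader(f):
                video_id, timestamp = row[0], float(row[1])
                box = tuple(float(v) for v in row[2:6])  # x1, y1, x2, y2
                action_id = int(row[6])
                labels[(video_id, timestamp, box)].append(action_id)
        return labels

Grouping by (video, timestamp, box) makes the multi-label nature of the data explicit: each key maps to the full set of atomic actions that person is performing at that moment.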

AVA v1.0 is now available for download. We are preparing to release AVA v2.0 soon, which will include more densely sampled annotations as well as person links. Details can be found in our updated arXiv paper.

Ready to explore or start using AVA?
