
Teaching Computers to Understand Human Actions

by Joshua New

Google has published a dataset of short film clips containing “atomic visual actions (AVA),” which are distinct human actions such as walking or drinking. The AVA dataset consists of links to 57,600 three-second YouTube clips depicting 80 different actions filmed from a variety of angles, along with annotations identifying the actions performed and the number of human actors in each clip. This dataset could help researchers develop computer vision systems capable of recognizing actions in video, rather than just classifying the contents of a frame.
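For readers who want a sense of how such annotations could be consumed, below is a minimal Python sketch that groups per-actor action labels by clip. The file name and column layout (video ID, clip timestamp, normalized bounding box, action ID, person ID) are assumptions about the released CSV format, not a definitive specification.

```python
import csv
from collections import defaultdict

# Hypothetical file name; assumed to contain one row per (person box, action label) pair.
ANNOTATIONS_CSV = "ava_train.csv"

# Assumed columns: video_id, timestamp_sec, x1, y1, x2, y2, action_id, person_id
clips = defaultdict(list)

with open(ANNOTATIONS_CSV, newline="") as f:
    for row in csv.reader(f):
        video_id, timestamp, x1, y1, x2, y2, action_id, person_id = row
        clips[(video_id, timestamp)].append(
            {
                "box": (float(x1), float(y1), float(x2), float(y2)),  # normalized coordinates
                "action_id": int(action_id),
                "person_id": int(person_id),
            }
        )

# Each key identifies a three-second clip centered on `timestamp`; the value lists
# every labeled actor and the action each is performing in that clip.
for (video_id, timestamp), labels in list(clips.items())[:3]:
    print(video_id, timestamp, len(labels), "labeled actions")
```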

Get the data.
