Teaching Machines to Understand What’s Going On In Videos

Researchers at the MIT-IBM Watson AI Lab have published the Moments in Time Dataset, a collection of one million labeled video clips intended to help train AI systems to recognize and understand actions in video. Each clip is three seconds long and depicts people, animals, objects, or natural phenomena in a dynamic scene.
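To make the scale of the data concrete, here is a minimal sketch of how such three-second, action-labeled clips might be wrapped as a training dataset in PyTorch. The directory layout, the hypothetical annotations.csv of (relative clip path, label) pairs, and the frame-sampling choices are all illustrative assumptions, not the dataset's official annotation format or loading code.

```python
import csv
from pathlib import Path

from torch.utils.data import Dataset
from torchvision.io import read_video


class MomentsClipDataset(Dataset):
    """Sketch of a loader for a Moments-in-Time-style corpus: a directory of
    short clips plus a CSV mapping each clip to a single action label.
    File names and CSV format here are assumptions for illustration only."""

    def __init__(self, root, annotation_csv, num_frames=16):
        self.root = Path(root)
        self.num_frames = num_frames
        with open(annotation_csv, newline="") as f:
            # Each non-empty row is assumed to be: relative/path/to/clip.mp4,label
            self.samples = [(row[0], row[1]) for row in csv.reader(f) if row]
        # Map each distinct action label (e.g. "opening", "running") to an index.
        self.classes = sorted({label for _, label in self.samples})
        self.class_to_idx = {c: i for i, c in enumerate(self.classes)}

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        rel_path, label = self.samples[idx]
        # read_video returns (frames, audio, info); frames are uint8 with shape (T, H, W, C).
        frames, _, _ = read_video(str(self.root / rel_path), pts_unit="sec")
        # A 3-second clip at ~30 fps has ~90 frames; subsample a fixed number of them.
        step = max(1, frames.shape[0] // self.num_frames)
        frames = frames[::step][: self.num_frames]
        # Convert to float (C, T, H, W) in [0, 1], the layout most video models expect.
        clip = frames.permute(3, 0, 1, 2).float() / 255.0
        return clip, self.class_to_idx[label]
```

A loader like this could then be handed to a standard DataLoader and a 3D-CNN or video transformer for action classification, though any real pipeline would also need the dataset's actual annotation files and per-clip preprocessing.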