Panasonic, a Japanese electronics company, and Stanford University’s Vision and Learning Lab have published a multimodal dataset of videos depicting daily activities inside the home, intended to help developers train AI systems that operate in living spaces. The dataset combines data from multiple sensors, including 30 hours of multi-view camera video and heat-sensor readings, with annotations characterizing the human actions in each video. It covers 70 classes of daily activities, such as washing dishes, and 453 individual actions, such as placing dishes on a dish rack.