Researchers from the University of Alicante in Spain have released RobotriX, a dataset of 512 sequences of actions that robots performed in 16 virtual rooms, to help AI solve robotic vision problems such as object detection. To generate the data, human operators used virtual reality to control the robot agents as they performed actions, such as grasping a bowl, in the simulated environments. For each of the nearly 8 million frames in the dataset, the researchers provide high-resolution images, depth maps, and 2D and 3D bounding boxes around household items such as sinks, bags, and shelves.
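The per-frame annotations described above (RGB image, depth map, 2D and 3D boxes) can be sketched as a simple record type. This is a hypothetical schema for illustration only; it does not reflect RobotriX's actual file format or field names, which are defined by the dataset's own tooling.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Box2D:
    """Axis-aligned 2D box in pixel coordinates (hypothetical layout)."""
    label: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class Box3D:
    """3D box as center + extent in meters (hypothetical layout)."""
    label: str
    center: Tuple[float, float, float]
    extent: Tuple[float, float, float]

@dataclass
class Frame:
    """One annotated frame: image, depth map, and object boxes."""
    rgb_path: str
    depth_path: str
    boxes_2d: List[Box2D] = field(default_factory=list)
    boxes_3d: List[Box3D] = field(default_factory=list)

# Example record for a single frame (paths and values are made up).
frame = Frame(
    rgb_path="seq_001/rgb/000000.png",
    depth_path="seq_001/depth/000000.png",
    boxes_2d=[Box2D("bowl", 120.0, 80.0, 200.0, 160.0)],
    boxes_3d=[Box3D("bowl", (0.4, 0.1, 0.9), (0.15, 0.08, 0.15))],
)
print(frame.boxes_2d[0].label)  # → bowl
```

A flat record like this makes it easy to iterate over the roughly 8 million frames and feed image/box pairs to an object detector.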