Facebook and the Georgia Institute of Technology have released TextVQA, a dataset of images, questions, and answers, to foster the development of systems that can read text in images and answer questions about those images. The dataset includes more than 28,000 images, 45,000 questions about them (such as “what is the title of the white book?”), and 450,000 answers. AI systems that can read and reason about text in images could be especially helpful to visually impaired people.
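
To make the dataset's shape concrete, here is a minimal Python sketch of how one might represent a TextVQA-style example (an image, a question about text in that image, and several human-provided answers). The field names and values are illustrative assumptions, not the dataset's actual schema.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch of a TextVQA-style record: an image, a question about
    # text appearing in the image, and multiple annotator answers.
    # Field names and values are assumptions for illustration, not the
    # dataset's actual schema.
    @dataclass
    class TextVQAExample:
        image_path: str                # path or URL to the image
        question: str                  # natural-language question about text in the image
        answers: List[str] = field(default_factory=list)  # answers from several annotators

    # Hypothetical instance mirroring the question quoted above.
    example = TextVQAExample(
        image_path="images/white_book.jpg",               # hypothetical path
        question="what is the title of the white book?",
        answers=["introduction to algorithms"],           # hypothetical answer
    )

    if __name__ == "__main__":
        print(example.question, "->", example.answers)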