Researchers at Georgia Tech and Virginia Tech have launched the 2018 Visual Question Answering (VQA) Challenge, providing training data for participants who compete to build the AI system that best answers questions about the contents of images. The VQA dataset contains over 256,000 images, each paired with at least three questions about its contents, such as “Who is wearing glasses?” and “Where is the child sitting?”, along with ground-truth answers for each question and three plausible but incorrect answers per question. Participants have until May 20, 2018, to submit their entries.
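For participants getting started, the dataset is distributed as separate question and annotation JSON files that must be joined by question ID. The sketch below shows one way to pair them in Python; the field names follow the publicly documented VQA v2 layout, but the exact schema and file names are assumptions here and should be verified against the official download.

```python
import json


def load_vqa_pairs(questions_path, annotations_path):
    """Pair VQA-style questions with their annotated answers by question_id.

    Assumes a v2-style layout: a "questions" list holding question_id,
    image_id, and the question text, and an "annotations" list holding the
    consensus answer plus per-annotator answers. Check the official files
    before relying on these field names.
    """
    with open(questions_path) as f:
        questions = {q["question_id"]: q for q in json.load(f)["questions"]}
    with open(annotations_path) as f:
        annotations = json.load(f)["annotations"]

    pairs = []
    for ann in annotations:
        q = questions[ann["question_id"]]
        pairs.append({
            "image_id": q["image_id"],
            "question": q["question"],                    # e.g. "Who is wearing glasses?"
            "answer": ann["multiple_choice_answer"],      # consensus ground-truth answer
            "all_answers": [a["answer"] for a in ann["answers"]],  # per-annotator answers
        })
    return pairs


if __name__ == "__main__":
    # Hypothetical paths; adjust to wherever the files were extracted.
    pairs = load_vqa_pairs(
        "v2_OpenEnded_mscoco_train2014_questions.json",
        "v2_mscoco_train2014_annotations.json",
    )
    print(len(pairs), "question-answer pairs loaded")
    print(pairs[0])
```

Joining on question ID rather than image ID keeps multiple questions about the same image distinct, which matters since each image carries several questions.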