This week’s list of data news highlights covers May 25-31, 2019, and includes articles about an AI system that can detect ghostwriting and volunteers using data to find a missing person in Hawaii’s Makawao Forest Reserve.
1. Teaching Robots to Write in New Languages
Researchers from Brown University have developed an AI model that enables robots to write handwritten words in several languages, including languages the model was never trained on. The researchers trained the system on a dataset of Japanese characters. The system has two models: one ensures the robot's stroke is heading in the right direction, and another helps the robot decide when to move on to the next stroke. The researchers found that the robot could then use machine vision to examine words in English, Greek, and Hindi and write them.
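Brown's code is not reproduced here, but the two-step idea, pick a stroke direction and then decide when to lift the pen, can be illustrated with a toy tracer. Everything below (the greedy nearest-pixel rule, the lift_distance threshold) is a hypothetical stand-in for the two learned models, not the researchers' system.

```python
import numpy as np

def trace_strokes(glyph, lift_distance=2.0):
    """Toy stroke tracer: greedily follow nearby ink pixels, lifting the
    pen when the nearest unvisited pixel is too far away. This stands in
    for the two learned models (stroke direction, stroke switching)."""
    ink = {tuple(p) for p in np.argwhere(glyph > 0)}
    strokes, current = [], []
    pos = min(ink) if ink else None
    while ink:
        ink.discard(pos)
        current.append(pos)
        if not ink:
            break
        # "Direction model": step toward the closest remaining ink pixel.
        nxt = min(ink, key=lambda p: np.hypot(p[0] - pos[0], p[1] - pos[1]))
        if np.hypot(nxt[0] - pos[0], nxt[1] - pos[1]) > lift_distance:
            # "Stroke model": too far to continue -- lift the pen, start a new stroke.
            strokes.append(current)
            current = []
        pos = nxt
    if current:
        strokes.append(current)
    return strokes

# Tiny example: trace a plus-sign glyph and count the resulting strokes.
glyph = np.zeros((5, 5), dtype=int)
glyph[2, :] = 1
glyph[:, 2] = 1
print(len(trace_strokes(glyph)), "strokes")
```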
2. Detecting Ghostwriting
Researchers from the University of Copenhagen have developed an AI system called Ghostwriter that can detect with 90 percent accuracy whether a particular individual wrote a piece of text. The researchers trained and tested the system on a combined 130,000 writing assignments submitted by 10,000 different high school students. The system identifies discrepancies in a person's writing style, such as typical word length, sentence structure, and word usage, by comparing a new piece of writing against the person's previously submitted work.
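Ghostwriter itself is not publicly available, but the underlying idea of comparing stylometric features against a student's earlier submissions can be sketched in a few lines of Python; the features and threshold below are illustrative stand-ins, not the researchers' actual model.

```python
import re
import numpy as np

def style_features(text):
    """Simple stylometric fingerprint: average word length, average
    sentence length, and vocabulary richness (unique words / total words)."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return np.array([
        np.mean([len(w) for w in words]),
        len(words) / max(len(sentences), 1),
        len(set(words)) / max(len(words), 1),
    ])

def likely_same_author(new_text, previous_texts, threshold=0.25):
    """Flag a submission whose style deviates too far from the average
    of the author's previous submissions (illustrative threshold)."""
    history = np.mean([style_features(t) for t in previous_texts], axis=0)
    deviation = np.linalg.norm((style_features(new_text) - history) / (history + 1e-9))
    return deviation < threshold

past = ["I walked to school and then I ate lunch with my friends."] * 3
print(likely_same_author("I walked home and then I read a book with my sister.", past))
```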
3. Predicting a Patient’s Risk of Dying
Danish researchers have developed an algorithm that can predict a patient's risk of dying in the hospital within 30 and 90 days of admission. The researchers trained the algorithm on data from more than 230,000 intensive care unit patients, and the algorithm combines this data with measurements and tests taken during the first 24 hours of a patient's admission to make its predictions. The researchers found that diagnoses up to 10 years old could affect the predictions.
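The Danish team's model is not public, so the sketch below only illustrates the general recipe, training a classifier on features from the first 24 hours of an admission, using synthetic data and an off-the-shelf gradient boosting model; the feature names and labels are invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ICU admission data: e.g., age, mean heart rate,
# lowest blood pressure, a lab score, and years since an older diagnosis.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
# Toy label: risk rises with the first two features (illustrative only).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Predicted probability of death within 30 days for one held-out patient.
print(model.predict_proba(X_test[:1])[0, 1])
print("test accuracy:", model.score(X_test, y_test))
```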
4. Teaching AI to Identify Objects by Touch
Researchers from MIT have developed a glove with 550 sensors that can help AI systems learn to identify objects by touch. The researchers wore the glove while handling 26 different objects, including glasses, a tennis ball, and a mug, helping a neural network learn to identify the objects with 76 percent accuracy and estimate their weights to within 60 grams. Similar gloves usually use fewer sensors and cost thousands of dollars, but the researchers' glove uses only $10 worth of materials.
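As a rough illustration of the learning task, the sketch below trains an off-the-shelf neural network to classify synthetic 550-sensor pressure readings into 26 object classes; the data generation and network size are assumptions, not MIT's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

N_SENSORS, N_OBJECTS = 550, 26   # sensor and object counts from the article

# Synthetic pressure maps: each object gets a characteristic pattern plus noise.
rng = np.random.default_rng(1)
patterns = rng.normal(size=(N_OBJECTS, N_SENSORS))
labels = rng.integers(0, N_OBJECTS, size=5000)
X = patterns[labels] + rng.normal(scale=0.8, size=(5000, N_SENSORS))

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=1)
clf.fit(X_train, y_train)
print("touch classification accuracy:", clf.score(X_test, y_test))
```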
5. Tracking Traffic Flow
Researchers in Pakistan at the University of Swabi and COMSATS University Islamabad, along with researchers in Bangladesh at Jessore University of Science and Technology, have developed an AI system that can track traffic flow. The system uses cameras to identify vehicles based on their contours and colors, which allows it to determine each vehicle's location, speed, and direction. The system could help governments understand the causes of congestion and accidents.
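A minimal version of contour-based vehicle tracking can be put together with OpenCV, as sketched below; the video path, thresholds, and nearest-centroid matching are illustrative choices, not the researchers' implementation.

```python
import cv2
import numpy as np

# Illustrative contour-based vehicle tracking (not the researchers' system).
# "traffic.mp4" is a placeholder path for any roadside camera recording.
cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=50)
previous_centroids = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                      # moving pixels only
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    centroids = []
    for contour in contours:
        if cv2.contourArea(contour) < 500:              # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(contour)
        centroids.append((x + w // 2, y + h // 2))

    # Crude speed/direction estimate: match each centroid to the nearest
    # centroid from the previous frame and measure its displacement.
    for cx, cy in centroids:
        if previous_centroids:
            px, py = min(previous_centroids,
                         key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
            print("pixels moved this frame:", np.hypot(cx - px, cy - py))
    previous_centroids = centroids

cap.release()
```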
6. Using Echolocation to Detect if a Person Has Fallen
Researchers from the Wuhan University of Technology in China have developed an AI system that can analyze sound waves to detect whether a person is sitting, standing, walking, or falling. The system emits ultrasonic sounds, listens for the echoes with microphones, and then analyzes the reflected sound, whose pitch varies depending on the position of objects. The system could be helpful in senior communities for detecting individuals who have fallen.
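The sketch below shows the general shape of the idea, classifying a reflected ultrasonic signal by its dominant frequency, with made-up frequency bands standing in for whatever the Wuhan system actually learns.

```python
import numpy as np

def dominant_frequency(echo, sample_rate=48000):
    """Return the strongest frequency in a recorded echo using an FFT."""
    spectrum = np.abs(np.fft.rfft(echo))
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def classify_posture(echo, sample_rate=48000):
    """Toy rule: map the echo's dominant frequency band to a posture label.
    The bands below are illustrative placeholders, not measured values."""
    f = dominant_frequency(echo, sample_rate)
    if f > 20500:
        return "standing"
    if f > 20200:
        return "sitting"
    return "fallen"

# Simulate a reflection of a 20 kHz chirp whose pitch has shifted.
t = np.arange(0, 0.05, 1 / 48000)
reflected = np.sin(2 * np.pi * 20100 * t)   # low-band return in this toy scheme
print(classify_posture(reflected))
```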
7. Training AI Agents to Work in Teams
DeepMind has developed AI agents that can work in teams to consistently beat other teams at Quake III Arena, a 3D first-person video game. Training AI agents to work in groups is more challenging than training them individually, so DeepMind trained the agents in parallel using reinforcement learning on 450,000 games of Capture the Flag, a classic game mode in Quake III Arena. The agents developed strategies such as outnumbering opponents at crucial moments and waiting near the enemy base until a flag appeared.
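The scale of DeepMind's training is far beyond a snippet, but the core idea, agents improving from a shared team reward over many self-played games, can be shown with a toy REINFORCE-style coordination game; the three "strategies" and the learning rate are invented for the example.

```python
import numpy as np

# Toy stand-in for team-based self-play: two teammates each learn a policy
# over 3 strategies and are rewarded only when they coordinate.
rng = np.random.default_rng(0)
N_ACTIONS, LEARNING_RATE = 3, 0.1
logits = np.zeros((2, N_ACTIONS))   # one policy per teammate

def sample(logit_row):
    """Sample an action from a softmax policy and return it with its probabilities."""
    p = np.exp(logit_row - logit_row.max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p), p

for game in range(5000):
    actions, probs = zip(*(sample(row) for row in logits))
    reward = 1.0 if actions[0] == actions[1] else 0.0   # shared team reward
    # REINFORCE-style update: push each policy toward actions that won.
    for i in range(2):
        grad = -probs[i]
        grad[actions[i]] += 1.0
        logits[i] += LEARNING_RATE * reward * grad

print("learned strategy per teammate:", logits.argmax(axis=1))
```

With only the shared reward as a signal, both toy policies typically converge on the same strategy, a very small-scale echo of coordinated behavior emerging from team rewards.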
8. Finding a Missing Person with Data
Volunteer rescuers used data to find Amanda Eller, a hiker who was lost in Hawaii's Makawao Forest Reserve for sixteen days. The volunteers used GPS data to log where they had already searched, revealing the areas they still needed to cover. In addition, the volunteers used an analysis of where missing persons are often found to learn that Eller was likely near a waterfall, which is where they found her.
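The coverage-logging step is straightforward to sketch: rasterize the volunteers' GPS tracks onto a grid and report which cells remain unvisited. The coordinates and cell size below are made up for illustration.

```python
import numpy as np

def unsearched_cells(gps_tracks, bounds, cell_size=0.001):
    """Mark grid cells touched by any GPS point and return the fraction of
    cells not yet visited plus their indices.
    gps_tracks: list of (latitude, longitude) points.
    bounds: (lat_min, lat_max, lon_min, lon_max)."""
    lat_min, lat_max, lon_min, lon_max = bounds
    rows = int(np.ceil((lat_max - lat_min) / cell_size))
    cols = int(np.ceil((lon_max - lon_min) / cell_size))
    visited = np.zeros((rows, cols), dtype=bool)
    for lat, lon in gps_tracks:
        r = min(int((lat - lat_min) / cell_size), rows - 1)
        c = min(int((lon - lon_min) / cell_size), cols - 1)
        visited[r, c] = True
    return 1 - visited.mean(), np.argwhere(~visited)

# Example with made-up coordinates covering a small search grid.
tracks = [(20.8601 + 0.0003 * i, -156.2603) for i in range(20)]
fraction_left, remaining = unsearched_cells(tracks, (20.86, 20.87, -156.27, -156.26))
print(f"{fraction_left:.0%} of the grid still unsearched")
```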
9. Generating Speech From Little Data
Researchers from Microsoft and Zhejiang University in China have developed an AI system that can generate speech from text using only 20 minutes of voice samples and matching transcriptions. Similar systems usually require significantly more data. The researchers pair a text-to-speech model, which transforms a text y into speech x, with an automatic speech recognition model, which turns speech x back into text y, and use the output of each model to iteratively train the other, which helps the system maintain accuracy while using less data. More than 99 percent of the words the system speaks are intelligible.
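The paragraph above describes a dual-training loop, and the structural sketch below shows how the pieces fit together; the placeholder models and toy data are assumptions, since Microsoft's code is not reproduced here.

```python
# Structural sketch of the dual-training loop described above (not
# Microsoft's code): each model's output becomes training data for the
# other, so a small paired dataset can bootstrap both directions.

def train_step(model, inputs, targets):
    """Placeholder for one gradient update on (inputs, targets) pairs."""
    model["steps"] += 1   # stand-in for real parameter updates

tts = {"steps": 0}   # text  -> speech
asr = {"steps": 0}   # speech -> text
paired = [("hello world", [0.1, 0.2, 0.3])]         # small set of real pairs
unpaired_text = ["good morning", "see you soon"]    # plentiful text-only data
unpaired_speech = [[0.4, 0.5], [0.6, 0.7, 0.8]]     # plentiful audio-only data

def synthesize(text):       # placeholder TTS inference
    return [float(len(w)) for w in text.split()]

def transcribe(speech):     # placeholder ASR inference
    return " ".join("word" for _ in speech)

for epoch in range(10):
    # 1. Supervised step on the small paired set keeps both models anchored.
    for text, speech in paired:
        train_step(tts, text, speech)
        train_step(asr, speech, text)
    # 2. Dual step: pseudo-pairs generated by one model train the other.
    for text in unpaired_text:
        train_step(asr, synthesize(text), text)      # TTS output -> ASR target
    for speech in unpaired_speech:
        train_step(tts, transcribe(speech), speech)  # ASR output -> TTS target

print(tts["steps"], asr["steps"])
```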
10. Using a Person’s Voice to Guess Their Appearance
Researchers from MIT have developed a neural network that can create an image of what a person might look like by analyzing an audio clip of the individual speaking. The researchers trained the network on millions of Internet videos, including videos from YouTube, helping the network learn correlations between a person’s voice and face. The network can create images that align with a speaker’s age, gender, and ethnicity.
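As a toy illustration of learning correlations between voice and face, the sketch below fits a linear map from synthetic voice embeddings to face embeddings; real systems use deep networks trained on video, so this is only a structural analogy, not MIT's model.

```python
import numpy as np

# Toy illustration: fit a linear map from synthetic "voice embeddings"
# to "face embeddings" using least squares.
rng = np.random.default_rng(2)
n_clips, voice_dim, face_dim = 1000, 64, 32

true_map = rng.normal(size=(voice_dim, face_dim))          # hidden correlation
voice = rng.normal(size=(n_clips, voice_dim))              # one row per video clip
face = voice @ true_map + rng.normal(scale=0.1, size=(n_clips, face_dim))

# The learned map predicts a face embedding from a new speaker's voice.
learned_map, *_ = np.linalg.lstsq(voice, face, rcond=None)
new_voice = rng.normal(size=(1, voice_dim))
predicted_face = new_voice @ learned_map
print(predicted_face.shape)   # a face-embedding guess a decoder could render
```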