This week’s list of data news highlights covers March 9–15, 2019, and includes articles about using AI to detect whether a bot wrote a passage of text and to predict the cognitive development of infants.
1. Detecting Whether a Bot Wrote Text
Researchers from MIT, IBM, and Harvard University have developed the Giant Language model Test Room (GLTR), an interactive tool that analyzes whether a human or bot wrote a passage of text. GLTR makes predictions by assessing the likelihood a bot would have chosen each word in the passage based on the preceding text. GLTR then color codes each word in a passage to indicate how likely it was to have been written by a bot.
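The per-word likelihood idea behind GLTR can be sketched in a few lines. The real tool scores words against a large neural language model such as GPT-2; the toy bigram model, color buckets, and training sentence below are purely illustrative:

```python
# A minimal sketch of GLTR-style scoring, using a toy bigram model in
# place of the large neural language model the real tool relies on.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count how often each word follows each preceding word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def word_ranks(model, text):
    """For each word, find its rank among the model's predictions
    given the preceding word (rank 0 = the model's top choice)."""
    words = text.split()
    ranks = []
    for prev, actual in zip(words, words[1:]):
        predictions = [w for w, _ in model[prev].most_common()]
        ranks.append(predictions.index(actual) if actual in predictions else None)
    return ranks

def color_code(rank):
    """Map a rank to GLTR-style buckets: low ranks suggest a bot's choice."""
    if rank is None:
        return "purple"   # word the model would not have predicted at all
    if rank < 10:
        return "green"    # highly predictable choice
    if rank < 100:
        return "yellow"
    return "red"

model = train_bigram("the cat sat on the mat the cat ran on the mat")
print([color_code(r) for r in word_ranks(model, "the cat sat on the mat")])
```

A passage made up almost entirely of the model's top-ranked words (all green) is the kind of signal GLTR surfaces as evidence of bot authorship.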
2. Data Analysis Shows China Could Surpass the U.S. in AI Research Publications by 2020
The Allen Institute for Artificial Intelligence used data analysis to predict that China will publish more influential AI research papers than the United States by 2020. The institute uses the number of citations a paper receives to measure its quality, and its research suggests that China will surpass the United States in producing papers ranked in the top 10 percent and top 1 percent of all AI research papers by 2020 and 2025, respectively.
3. Predicting the Cognitive Development of Infants
Researchers from the University of North Carolina used machine learning and MRI brain scans to predict how newborn infants would cognitively develop by age two with 95 percent accuracy. The researchers found that white matter connections, which are complex networks of nerve fibers that transmit electrical signals, are highly predictive of cognitive development and could help identify children at risk for poor cognitive development.
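The general shape of this approach is to regress a later cognitive score on connectivity features extracted from each infant's scan. The study's actual pipeline is not described in this summary; the synthetic features, weights, and linear model below are purely illustrative:

```python
# A hypothetical sketch: regressing an age-two cognitive score on white
# matter connectivity features. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Rows: infants; columns: strength of individual white matter connections.
connectivity = rng.random((50, 4))

# Synthetic "age-two scores" driven by two of the four connections.
true_weights = np.array([3.0, 0.0, -2.0, 0.0])
scores = connectivity @ true_weights + rng.normal(0, 0.1, 50)

# Fit a linear model by least squares: scores ≈ connectivity @ weights.
weights, *_ = np.linalg.lstsq(connectivity, scores, rcond=None)

# Predict the score for a new infant's scan.
new_scan = np.array([0.5, 0.2, 0.8, 0.1])
print(float(new_scan @ weights))
```

The fitted weights recover which connections drive the outcome, which is the sense in which specific white matter connections can be "highly predictive" of later development.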
4. Exhibiting the Gestalt Effect
Researchers from Google Brain have demonstrated that, like human brains, artificial neural networks exhibit the gestalt effect, which is the ability to recognize whole forms from groups of unconnected lines, curves, shapes, or points. The researchers trained neural networks on images of triangles and found that the networks could then accurately recognize triangles in images that only show the corners of triangles, meaning the networks perceived the complete shape.
5. Analyzing How the Kidney and Liver Respond to Toxins
Researchers from Harvard Medical School used machine learning to discover that livers and kidneys respond to chemical toxins in nine distinct ways, including with decreased levels of red blood cells. The researchers used an unsupervised machine learning algorithm to analyze a publicly available dataset of the effects of 160 different chemical compounds, including common ones such as ibuprofen, in rats. Human patients often have to stop taking medication because of adverse side effects, such as internal bleeding, but this research could eventually help inform treatment strategies such as dosing schedules to reduce the likelihood of adverse effects.
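Unsupervised grouping of compounds by their measured effects can be sketched with plain k-means clustering. The paper's actual algorithm and dataset are not reproduced here; the two-dimensional "effect profiles" below are purely illustrative:

```python
# A minimal unsupervised-clustering sketch: grouping compounds by their
# measured effect profiles. Data and cluster count are illustrative.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: assign points to the nearest centroid, recompute."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid.
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Toy effect profiles: [change in red blood cells, change in a liver enzyme].
effects = np.array([
    [-0.9, 0.1], [-1.0, 0.0], [-0.8, 0.2],   # anemia-like responses
    [0.0, 1.1], [0.1, 0.9], [-0.1, 1.0],     # enzyme-elevation responses
])
labels = kmeans(effects, k=2)
print(labels)
```

The study used the same basic idea at larger scale, letting the algorithm discover that organ responses fall into nine distinct groups rather than specifying the groups in advance.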
6. Detecting Water Leaks with the Internet of Things and AI
Atlantis Casino Resort Spa in Nevada has combined smart-metering devices and AI to detect water leaks and reduce its water consumption. The system analyzes the resort’s water usage for irregularities, alerting the maintenance staff when it finds a leak, such as when it detected a burst pipe behind the wall of a spa, so staff can quickly address the problem. The system has helped the resort realize nearly $40,000 in savings since last year.
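The core of such a system is flagging readings that deviate sharply from the recent baseline. The resort's actual system and thresholds are not public; the window size, threshold, and usage figures below are illustrative assumptions:

```python
# A hypothetical sketch of leak detection on smart-meter data: flag any
# reading far above the statistics of the preceding readings.
from statistics import mean, stdev

def detect_leaks(readings, window=6, threshold=3.0):
    """Flag readings more than `threshold` standard deviations above
    the mean of the preceding `window` readings."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and readings[i] > mu + threshold * sigma:
            alerts.append(i)   # likely leak: usage spike vs. baseline
    return alerts

# Hourly gallons used; the spike at index 8 mimics a burst pipe.
usage = [100, 102, 98, 101, 99, 103, 100, 102, 400, 101]
print(detect_leaks(usage))   # → [8]
```

An alert at a specific meter and time is what lets maintenance staff trace a hidden fault, like the burst pipe behind the spa wall, before the water bill reveals it.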
7. Predicting Indicators of Alzheimer’s
Researchers from IBM have developed a machine learning model that analyzes blood samples to predict if a person will display indicators of Alzheimer’s disease in their spinal fluid. People with Alzheimer’s disease display signs of the disease in their spinal fluid before exhibiting traditional symptoms, but finding the biomarkers in spinal fluid can require invasive tests. The researchers’ model predicts the level of amyloid-beta, a peptide in spinal fluid linked to Alzheimer’s, by identifying sets of proteins in blood without ever having to extract spinal fluid.
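The underlying task is mapping a blood protein profile to an estimated spinal fluid amyloid-beta level. IBM's actual model and features are not public; the nearest-neighbor approach, protein values, and amyloid levels below are purely illustrative:

```python
# A hypothetical sketch: estimate spinal fluid amyloid-beta from blood
# protein levels by averaging the most similar known patients.
import math

# (blood protein levels, measured spinal amyloid-beta) for known patients.
training = [
    ((2.1, 0.4, 1.8), 520.0),
    ((2.0, 0.5, 1.7), 540.0),
    ((1.1, 1.6, 0.6), 310.0),   # low amyloid-beta: an Alzheimer's indicator
    ((1.0, 1.5, 0.7), 295.0),
]

def predict_amyloid(proteins, k=2):
    """Average the amyloid-beta levels of the k nearest blood profiles."""
    by_distance = sorted(training, key=lambda item: math.dist(proteins, item[0]))
    return sum(level for _, level in by_distance[:k]) / k

print(predict_amyloid((1.05, 1.55, 0.65)))   # near the low-amyloid profiles
```

The payoff is the same as in the research: an estimate of a spinal fluid biomarker from a routine blood draw, with no lumbar puncture required.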
8. Predicting Trucking Maintenance
NFI Industries Inc., a trucking, logistics, and supply chain company, is using AI to determine when components in its nearly 13,000 tractors and trailers will need adjusting or replacing. NFI collects data from multiple sources, including repair logs, payload weights, and the braking styles of individual drivers. Noodle.ai, a San Francisco-based startup, then analyzes this data to help NFI make decisions such as when to adjust brakes or replace filters.
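The value of feeding in payload weights and braking styles is that wear depends on how a truck is used, not just its mileage. NFI and Noodle.ai's actual models are not public; the linear wear model and every coefficient below are deliberately simple illustrative assumptions:

```python
# A hypothetical sketch of predictive maintenance: project brake-pad
# life from usage data with a toy linear wear model.
def days_until_service(pad_thickness_mm, daily_miles, payload_tons,
                       hard_brakes_per_100mi, min_thickness_mm=3.0):
    """Estimate days until pads reach the service threshold: heavier
    payloads and harder braking wear pads faster."""
    # Assumed wear coefficients (mm per mile), purely illustrative.
    wear_per_mile = 1e-5 * (1 + 0.05 * payload_tons + 0.1 * hard_brakes_per_100mi)
    daily_wear = wear_per_mile * daily_miles
    remaining = pad_thickness_mm - min_thickness_mm
    return remaining / daily_wear

# A gentle driver vs. an aggressive one on the same route and load.
print(round(days_until_service(12.0, 400, 20, hard_brakes_per_100mi=1)))
print(round(days_until_service(12.0, 400, 20, hard_brakes_per_100mi=8)))
```

Even this toy model shows why driver-level braking data matters: the same brakes on the same route can need service hundreds of days earlier under an aggressive driver.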
9. Estimating How Much Carbon a Forest Stores
San Francisco startup Pachama has developed technology that uses machine learning to analyze satellite, drone, and LIDAR images to measure forests used for carbon offsets. Measuring the carbon storage potential of a forest typically requires humans to physically measure and count the number of trees in a plot, which can be time-consuming. Pachama’s technology analyzes images to estimate the carbon storage potential of forests and verifies the predictions with drones.
10. Using Supercomputers to Keep the Mustang Cool and Fast
Ford used supercomputers to design the optimal shape, in terms of aerodynamics and cooling, for its new GT500 Mustang, helping the vehicle stay cool while its 700-horsepower V8 engine generates significant amounts of heat. The supercomputer helped find the optimal size of the grille area as well as the ideal placement of cooling components while calculating how air would flow through the grille and affect the car’s wind resistance.
Image: Geran de Klerk