This week’s list of data news highlights covers November 14, 2020 – November 20, 2020 and includes articles about predicting traffic with a supercomputer and reconstructing the origins of the Milky Way using neural networks.
1. Identifying a Drug to Treat Elderly People with COVID-19
London-based biotechnology startup BenevolentAI has used AI to identify baricitinib, a drug traditionally used to treat rheumatoid arthritis, as a potential treatment for elderly people with COVID-19. The startup used AI software to parse research papers for drugs that could block the COVID-19 infection process and discovered that baricitinib could stop the infection from entering lung cells. In clinical trials, the drug reduced mortality by 71 percent, and the U.S. Food and Drug Administration recently authorized it for emergency use to treat patients hospitalized with COVID-19.
2. Simulating Combustion Faster with AI
Cerebras Systems, a U.S. computer systems company, has developed an AI supercomputer that can solve large simulations faster than previously possible. In a simulation of combustion in a coal-fired power plant involving over 500 million variables, the supercomputer completed the calculations in 6 microseconds (6 millionths of a second).
3. Predicting Traffic with a Supercomputer
Researchers from the Argonne National Laboratory (ANL), a research center for the U.S. Department of Energy, have trained a machine learning model to better predict highway congestion based on traffic speed data captured by sensors stationed along the California highway system. Previous machine learning models could only process data streaming in from up to 300 sensors, but using ANL’s supercomputer, researchers were able to process data captured by more than 11,000 sensors. When tested, the model predicted traffic speeds to within 6 miles per hour of observed speeds.
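The ANL model's task, predicting each sensor's upcoming speed from the current readings of the whole sensor network, can be illustrated with a toy sketch. The data below is synthetic, the network has 5 sensors instead of 11,000, and the least-squares autoregressive model is a stand-in for ANL's actual (unpublished here) architecture; none of these specifics come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row is a time step, each column one highway
# sensor's measured speed in mph (the real system uses 11,000+ sensors).
n_steps, n_sensors = 500, 5
t = np.arange(n_steps)
# Synthetic congestion cycle plus noise, with a phase offset per sensor.
speeds = (55
          + 10 * np.sin(2 * np.pi * t[:, None] / 100 + np.arange(n_sensors))
          + rng.normal(0, 2, (n_steps, n_sensors)))

# Predict every sensor's speed at t+1 from all sensors' speeds at t:
# a one-step autoregressive model fit by ordinary least squares.
X = np.hstack([speeds[:-1], np.ones((n_steps - 1, 1))])  # bias column
Y = speeds[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W
mae = np.abs(pred - Y).mean()
print(f"mean absolute error: {mae:.1f} mph")
```

On this synthetic data the error stays well inside the 6 mph band the article reports for the real model, though that comparison is only illustrative.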
4. Reducing Road Accidents with Machine Learning
Transport for New South Wales (TfNSW), an Australian transport and roads agency, has partnered with Microsoft to identify potentially dangerous intersections and reduce road accidents with machine learning. TfNSW used 50 vehicles to collect data on vehicle speeds, braking, acceleration, and lateral movement at 5 intersections over the span of 10 months. By applying machine learning to compare these driving patterns with existing crash investigation data, TfNSW found that two of the five intersections have vehicle blind spots that make them dangerous and should undergo modifications to make them safer.
5. Reconstructing the Origins of the Milky Way with Neural Networks
Astrophysicists at the University of Heidelberg in Germany have used neural networks to reconstruct the origins of the Milky Way. To do this, the team first analyzed the characteristics of star clusters that orbit the Milky Way, such as their age, chemical composition, and orbital motion. The team then trained a neural network to analyze how the clusters of stars within each host galaxy merged together and discovered that the Milky Way is made up of approximately 20 galaxies with over 110 million stars combined.
6. Moderating Facebook Posts with Machine Learning
Facebook has developed machine learning algorithms to prioritize which posts moderators review. Previously, moderators reviewed posts based on when they were reported, but now, moderators will review posts according to how the algorithm ranks them based on their virality, severity, and violation of community rules. The algorithm ranks each post based on whether the subject of an image, caption, or header includes harmful content, such as child exploitation, terrorism, or self-harm.
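The prioritization described above, ranking reported posts by virality, severity, and rule violation rather than report time, can be sketched as a weighted scoring function. Facebook's actual model and weights are not public; the fields, weights, and posts below are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ReportedPost:
    post_id: str
    virality: float         # e.g. shares per hour, normalized to [0, 1]
    severity: float         # harm-category weight, normalized to [0, 1]
    violation_score: float  # classifier confidence the post breaks a rule

def review_priority(post: ReportedPost) -> float:
    # Weight severity most heavily so the worst harm is reviewed first.
    # These weights are illustrative, not Facebook's.
    return (0.5 * post.severity
            + 0.3 * post.virality
            + 0.2 * post.violation_score)

queue = [
    ReportedPost("a", virality=0.9, severity=0.1, violation_score=0.4),
    ReportedPost("b", virality=0.2, severity=0.9, violation_score=0.8),
    ReportedPost("c", virality=0.5, severity=0.5, violation_score=0.5),
]
for post in sorted(queue, key=review_priority, reverse=True):
    print(post.post_id, round(review_priority(post), 2))
```

In this toy queue, the high-severity post "b" jumps ahead of the more viral but low-severity post "a", mirroring the shift away from first-reported, first-reviewed ordering.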
7. Diagnosing Tinnitus from Brain Scans Using AI
The Bionics Institute, a nonprofit medical research organization in Australia, has used AI to diagnose tinnitus, a condition that causes people to hear buzzing or ringing sounds. The researchers presented visual and auditory stimuli to patients with and without tinnitus and used brain scan imaging techniques to measure the blood flow and oxygen levels of active brain regions, such as the frontal lobe. Using these measurements and self-reported information from patients about the severity of the stimuli, the AI system accurately spotted the presence of tinnitus 78 percent of the time. The AI also distinguished between mild and severe forms of tinnitus with 87 percent accuracy.
8. Aiding Patient Diagnoses for Adverse Childhood Experiences
Researchers from Oak Ridge National Laboratory, a U.S. government-funded research lab, have developed an AI system to assist professionals diagnosing and treating patients who are suffering from conditions caused by adverse childhood experiences, such as abuse or neglect. Because effective diagnosis can involve asking thousands of questions pertaining to the nuances of an individual case, the researchers are improving the process by first extracting relevant patient information and inferring what interventions would work best using an AI system that works like a chatbot. For example, the AI system can process input such as “my home has no heating” into inferences about housing instability and search through research on related adverse experiences to suggest the best course of action.
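The kind of inference in the example above, mapping a free-text patient statement like "my home has no heating" to a category such as housing instability, can be sketched with simple keyword rules. This is only a toy stand-in for ORNL's system; the categories and keywords below are invented for illustration.

```python
# Illustrative keyword rules, not ORNL's actual model or taxonomy.
CATEGORY_KEYWORDS = {
    "housing instability": ["no heating", "eviction", "homeless"],
    "food insecurity": ["skipped meals", "no food", "food bank"],
    "neglect": ["left alone", "unsupervised"],
}

def infer_categories(statement: str) -> list[str]:
    """Return every adverse-experience category the statement matches."""
    text = statement.lower()
    return [category for category, keywords in CATEGORY_KEYWORDS.items()
            if any(keyword in text for keyword in keywords)]

print(infer_categories("My home has no heating"))  # → ['housing instability']
```

A production system would replace the keyword lookup with a learned language model, but the interface, text in, inferred categories out, is the same.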
9. Decoding Brain Signals with Machine Learning
Researchers from the University of Southern California have developed a machine learning algorithm that can determine which brain signals control certain functions, such as walking and breathing. The researchers trained the algorithm on brain scan images to decode the neural patterns that correspond to a single behavior. The U.S. Army is currently piloting the algorithm to detect behaviors that indicate a soldier is experiencing stress or fatigue.
10. Predicting How Skin Cancer Patients Will Respond to Drug Treatments
Researchers from New York University’s School of Medicine have developed a machine learning algorithm that determines how well a patient with metastatic melanoma, an advanced-stage skin cancer, will respond to tumor-suppressing drugs. The team trained the algorithm using images of 302 tumor tissue samples along with patients’ medical histories, such as the severity of their condition and how they responded to an immunotherapy regimen. When tested, the algorithm predicted a patient’s response to immunotherapies with 80 percent accuracy.
Image: John Howard