This week’s list of data news highlights covers October 24, 2020 – October 30, 2020 and includes articles about using deep learning to find the best treatments for tumors and spotting signs of a stroke with machine learning.
1. Detecting Solar Flares to Find Habitable Planets with AI
Researchers at the University of Chicago and the University of New South Wales in Australia have trained a neural network to detect flares that erupt from stars. Since stellar flares can incinerate the atmospheres of planets forming nearby, scientists must search for habitable planets around cooler stars that produce fewer stellar flares. Currently, astronomers look for flares through a time-consuming manual process of measuring the brightness of stars by eye. Using neural networks, researchers can identify flares far more efficiently: the AI system discovered more than 23,000 flares from 3,200 stars.
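The article does not publish the network's details, but the underlying signal it learns is a sudden brightness spike in a star's light curve. As a rough illustration only (hypothetical data, not the researchers' model), a simple statistical spike detector over a light curve might look like this; it uses median/MAD statistics so the flare itself does not inflate the noise estimate:

```python
from statistics import median

def detect_flares(brightness, n_sigma=5.0):
    """Flag samples far above the star's baseline brightness.

    Median and MAD (median absolute deviation) are used instead of
    mean/stdev so a large flare does not skew the noise estimate.
    """
    base = median(brightness)
    mad = median(abs(b - base) for b in brightness)
    sigma = 1.4826 * mad  # converts MAD to a stdev-like scale for Gaussian noise
    return [i for i, b in enumerate(brightness) if b > base + n_sigma * sigma]

# A quiet star with one flare-like spike at index 5:
curve = [1.00, 1.01, 0.99, 1.00, 1.02, 1.35, 1.01, 0.98, 1.00, 1.01]
print(detect_flares(curve))  # -> [5]
```

A neural network improves on this baseline by learning flare shapes (fast rise, slow decay) rather than relying on a fixed threshold.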
2. Finding the Best Treatments for Tumors with Deep Learning
Researchers at the University of California San Diego School of Medicine have developed DrugCell, a deep-learning algorithm that analyzes tumor data and recommends the most effective drug treatment. The researchers trained DrugCell on the responses of 1,200 tumor cell lines, which are cultures of cancer cells grown in the laboratory, to 700 FDA-approved and experimental therapeutic drugs, creating more than 500,000 cell line–drug pairings. Using DrugCell, researchers can input data about a tumor and obtain the best-known drug for that tumor, the biological pathways that control the body’s response to the drug, and the best combinations of drugs to treat the malignancy.
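To make the cell line–drug pairing concrete, here is a toy sketch (entirely hypothetical names and scores, not DrugCell's actual output) of the kind of query the system answers: given predicted responses for (tumor, drug) pairs, return the top-ranked drug for a tumor.

```python
# Hypothetical predicted responses for (tumor, drug) pairs; higher = stronger.
responses = {
    ("tumor_A", "drug_1"): 0.82,
    ("tumor_A", "drug_2"): 0.41,
    ("tumor_A", "drug_3"): 0.67,
}

def best_drug(tumor, responses):
    """Return the drug with the highest predicted response for a tumor."""
    candidates = {d: r for (t, d), r in responses.items() if t == tumor}
    return max(candidates, key=candidates.get)

print(best_drug("tumor_A", responses))  # -> drug_1
```

DrugCell's contribution is producing those response scores from tumor genotype data, along with the biological pathways that explain them.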
3. Accelerating COVID-19 Drug Discovery with Deep Learning
Researchers from Michigan State University have repurposed deep learning models to focus on a specific SARS-CoV-2 protein called the main protease, an enzyme the virus uses to make copies of itself. By creating drugs that disable the main protease, researchers can stop the virus from replicating without disrupting a patient’s normal biochemistry. Since the SARS-CoV-2 main protease is nearly identical to that of the virus responsible for the 2003 SARS outbreak, researchers drew on information from drug developers about which chemical compounds interfered with the 2003 virus’s protease. The deep learning model used this information to predict and rank over 100 known chemical compounds, helping coronavirus drug developers save time and money by narrowing down their list of candidates.
4. Spotting Signs of a Stroke with Machine Learning
Researchers from Pennsylvania State University and Houston Methodist Hospital have created a machine learning tool that spots signs of a stroke from video recordings. To build and train the tool, researchers used an iPhone to record 80 patients who were experiencing stroke symptoms, such as sagging facial muscles and slurred speech, as they performed a speech test. The tool then used computational facial motion analysis and natural language processing to learn to identify these symptoms. When tested, the tool accurately diagnosed strokes 79 percent of the time, on par with doctors diagnosing strokes using CT scans. Additionally, the tool has a four-minute turnaround, giving doctors a clinical advantage since a delayed diagnosis can cost patients more neurons.
5. Training Machine Learning Models Without Writing Code
Microsoft has previewed Lobe, a free app that helps users train machine learning models to classify images without writing any code. Users import and label images of what they want the AI system to classify, such as pictures of different types of Californian plants, and the app then selects a suitable open-source machine learning model for the dataset and starts training it on the user’s device. Users can also review the model’s performance through real-time visual results, offer feedback on its predictions, and correct inaccurate labels. So far, early users of Lobe have built apps that identify harmful plants, send people alerts when they have left their garage door open, and detect beehive invaders such as wasps.
6. Repairing Potholes Using an AI Robot
The University of Liverpool in the United Kingdom has partnered with Robotiz3D, a company that builds detection and repair systems for robots, to build autonomous robots that will use AI to identify and fix potholes in U.K. roads. To date, pothole repairs have cost U.K. taxpayers over £1 billion, but autonomous robots could offer a cost-effective alternative by detecting potholes earlier and fixing them before they grow. The robots will autonomously patrol roads without the need for road closures, identifying defects such as cracks and potholes, characterizing their shape, collecting measurements, and capturing images to send to local authorities. Local authorities will then assess the extent of the pothole and decide whether to send in a physical repair team or have the robot fill it with quick-drying asphalt.
7. Improving the Control of Robot Arms with AI
A team of researchers at Stanford University has developed a joystick for the robotic arms fitted to wheelchairs that adults use for everyday tasks such as brushing their teeth. Typically, these robotic arms have six or seven joints that enable different ranges of motion, but to control each joint, users must switch between different modes on their joystick, which can be unintuitive and mentally tiring. To address this, the researchers created a joystick that gives commands in only two directions (up or down; left or right) yet can still control a multi-jointed robot smoothly and quickly. First, the researchers trained neural networks to encode various task-specific robotic motions into two directions. Then they trained a separate set of algorithms to predict a user’s desired action, for example predicting which of two cups a user is reaching for. Together, these two sets of algorithms allow a person to give two-dimensional joystick instructions and have the robot perform complex, context-dependent actions.
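The core idea is dimensionality reduction: two joystick axes drive all six or seven joints at once through a learned decoder. As a minimal sketch (hypothetical numbers; in the Stanford work the decoder is a trained, context-conditioned neural network, not a fixed matrix), a linear stand-in shows how a 2-D command expands into a coordinated 7-joint motion:

```python
# Each column maps one joystick axis to a coordinated motion across 7 joints,
# e.g. one axis might mean "move the cup closer" and the other "tilt to pour".
# These weights are made up for illustration.
DECODER = [
    [0.5, 0.0], [0.3, 0.1], [0.0, 0.6], [0.2, -0.2],
    [0.0, 0.4], [-0.1, 0.3], [0.1, 0.0],
]

def decode(z):
    """Map a 2-D latent action z = (axis_1, axis_2) to 7 joint velocities."""
    return [row[0] * z[0] + row[1] * z[1] for row in DECODER]

print(decode((1.0, 0.0)))  # pushing one axis drives all seven joints together
```

The learned version replaces this fixed matrix with a network whose output depends on context, which is why the same stick motion can mean different things near different objects.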
8. Detecting Password Sprays with Machine Learning
Microsoft has developed a machine learning algorithm that detects password sprays, a cyberattack in which a malicious actor uses bots to attack thousands of communication networks using a few commonly used passwords, rather than many passwords against a single account. Because these attacks occur sporadically, it is difficult for organizations to distinguish between intentional attacks and typical user error. To address this, the algorithm identifies attacks by checking features such as the reputation of the communication environment based on past security incidents, unfamiliar login properties, and other account deviations. When tested, the model detected password sprays with 98 percent accuracy.
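As a toy illustration of feature-based scoring (not Microsoft's actual model or weights), each login attempt can contribute boolean flags like those named above, and a weighted sum separates a likely spray from ordinary user error:

```python
# Hypothetical feature weights for scoring a suspicious login attempt.
FEATURE_WEIGHTS = {
    "bad_ip_reputation": 0.5,            # environment seen in past incidents
    "unfamiliar_login_properties": 0.3,  # new device, location, or client
    "many_accounts_same_password": 0.6,  # the spray signature itself
}

def spray_score(features):
    """Return an unnormalized risk score from boolean feature flags."""
    return sum(w for name, w in FEATURE_WEIGHTS.items() if features.get(name))

attempt = {"bad_ip_reputation": True, "many_accounts_same_password": True}
print(spray_score(attempt) > 0.5)  # True -> flag this attempt for review
```

A production model learns such weights from labeled incident data rather than hand-picking them.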
9. Predicting Cellular Responses with Machine Learning
Researchers at Stanford University School of Medicine have developed a machine learning model that uses a patient’s symptoms to predict their immune profile, which characterizes the cellular signals a person’s body will produce in response to changes in their immune system. Immune profiling of large groups of patients provides insight that can inform the development of more effective medical treatments, but gathering data from many patients is time-consuming and expensive. The model works by predicting a new patient’s immune profile from a set of profiles it has already trained on. Various immune features, such as whether a patient has been vaccinated, can affect a prediction, so the researchers trained the model to select features with strong predictive value and relevance. This allows the researchers to increase the accuracy of immune profiling without needing additional patients. For example, the AI system increased the accuracy of modeling immune profiles for patients with gum disease.
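The shape of the prediction task can be sketched with a toy nearest-neighbor example (not the Stanford model, and with made-up features and profiles): a new patient's immune profile is estimated from the most similar patient already measured.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_profile(new_features, training):
    """training: list of (clinical_features, immune_profile) pairs."""
    _, profile = min(training, key=lambda pair: distance(pair[0], new_features))
    return profile

# Hypothetical patients: (vaccinated, gum_disease, age) -> immune profile
patients = [
    ([1, 0, 35], [0.9, 0.1]),
    ([0, 1, 60], [0.2, 0.8]),
]
print(predict_profile([0, 1, 58], patients))  # -> [0.2, 0.8]
```

The feature-selection step the researchers describe matters because irrelevant features would distort exactly this kind of similarity comparison.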
10. Analyzing Photos and Their Photographers with AI
Researchers at AU Engineering at Aarhus University in Denmark and Tampere University in Finland have used neural networks to identify 23 well-known Finnish photographers based on the content of photos they took during the Second World War. An AI system that can automatically distinguish objects, people, and photographers from characteristics in an image can serve as a tool for producing content-based textual descriptions of public photographic archives, which the 2020 European Union Accessibility Directive now requires. To train the tool, researchers used 160,000 photographs captured between 1939 and 1945 and found that because some photographers have more distinct and recognizable image characteristics than others, the AI tool identified some photographers more easily. On average, the AI system accurately identified photographers 41 percent of the time.
Image: Christina Victoria Craft