This week’s list of data news highlights covers May 5-11, 2018, and includes articles about a new landslide database and an AI assistant that can have realistic conversations over the phone.
1. Predicting the Danger of Salmonella
Researchers at the Wellcome Sanger Institute, a genomics research institution in the United Kingdom, have developed a machine learning system capable of predicting how dangerous different strains of Salmonella bacteria are. The researchers trained their system on a dataset of sequenced DNA from multiple Salmonella strains known to produce different symptoms, allowing it to identify 200 genes that indicate whether a strain is more likely to cause ordinary food poisoning or typhoid fever. This system could help public health officials better gauge the severity of public health risks caused by Salmonella outbreaks.
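The general approach can be illustrated with a toy sketch (this is not the Sanger Institute's actual pipeline): train a classifier on gene presence/absence features, then inspect which genes the model relies on. The data below is synthetic, and the assumption that the first five genes drive severity is purely for illustration.

```python
# Toy sketch: predict strain severity from gene presence/absence,
# then surface candidate marker genes via feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_strains, n_genes = 200, 50

# Synthetic genomes: 1 = gene present, 0 = gene absent.
X = rng.integers(0, 2, size=(n_strains, n_genes))
# Hypothetical ground truth: the first 5 genes drive invasiveness.
y = (X[:, :5].sum(axis=1) >= 3).astype(int)  # 1 = typhoid-like

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Highly ranked genes are candidate markers, analogous to the
# ~200 indicator genes the researchers identified.
top_genes = np.argsort(clf.feature_importances_)[::-1][:5]
print(sorted(top_genes.tolist()))
```

Real genomic pipelines work on assembled sequence data and far larger feature spaces, but the core idea of ranking genes by predictive contribution is the same.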
2. Crowdsourcing a Global Landslide Database
NASA has launched a citizen science project called Landslide Reporter to encourage members of the public to submit information about landslides they have witnessed or seen in the news, such as a landslide's location, time, and size, as well as photos. NASA will review each report and use this data to populate the Cooperative Open Online Landslide Repository (COOLR), which it hopes to establish as the largest global landslide database. NASA researchers hope to use COOLR data to improve landslide prediction algorithms, which could strengthen prevention efforts.
3. Making Genetic Sequencing Standard Practice
Geisinger Health System has announced that it will incorporate genetic sequencing as part of routine patient care, making it the first healthcare organization in the United States to do so. Geisinger will first conduct a 1,000-patient pilot to determine how to best sequence patient DNA and incorporate this data into electronic health records. Geisinger will use this data to better determine the risk of certain cancers and cardiovascular disease, which it estimates will improve treatment for up to 15 percent of its patients.
4. Teaching Chatbots to Deal with Uncertainty
AI startup Gamalon has developed a method to improve how AI-powered chatbots handle uncertainty in language. The complexity and ambiguity of natural language can make it challenging for even the most advanced chatbots to interpret what a person means, limiting the utility of chatbots for certain applications. Gamalon's technique combines natural language processing with probabilistic techniques to let a chatbot make an educated guess about what someone actually means. The method also gives chatbots a conversational memory, allowing users to refer back to previously discussed topics without being explicit.
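The core idea of reasoning probabilistically about intent can be sketched in a few lines (this is a minimal stand-in, not Gamalon's system, which uses far richer probabilistic programs): instead of committing to a single label, the model reports a probability distribution over possible intents, so the bot knows when it is guessing. The training phrases and intent names below are invented for illustration.

```python
# Minimal probabilistic intent classifier: returns a distribution
# over intents rather than a single hard label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

training = [
    ("book a table for two", "reserve"),
    ("reserve a table tonight", "reserve"),
    ("cancel my reservation", "cancel"),
    ("please cancel the booking", "cancel"),
]
texts, intents = zip(*training)

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), intents)

# An ambiguous utterance mentions both booking and cancelling;
# the model exposes its uncertainty instead of silently picking one.
probs = clf.predict_proba(vec.transform(["cancel the table booking"]))[0]
for intent, p in zip(clf.classes_, probs):
    print(f"{intent}: {p:.2f}")
```

A chatbot built this way can ask a clarifying question whenever the top intent's probability falls below a threshold, rather than acting on a wrong guess.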
5. Modeling Living Human Cells
Researchers at the scientific research nonprofit Allen Institute have developed the first-ever predictive 3D model of a live human cell, which they call the Allen Integrated Cell. A variety of techniques exist for studying the inside of cells; however, they usually alter or damage the cells scientists want to study, limiting their value. The researchers trained an AI system on thousands of images of live human stem cells, some of which were altered to make their structures more visible, allowing it to examine an image of a new cell and create an accurate 3D model of its internal structure. This model could allow researchers to better understand the factors that influence why a cell might become diseased and identify what kinds of treatments might be effective.
6. Monitoring Your Sodium Intake in Real Time
Researchers at Georgia Tech have developed a flexible sensor that can be mounted on a retainer and monitor a wearer’s sodium intake in real time. The sensor monitors the presence of sodium ions in a wearer’s mouth and shares this data via Bluetooth with a smartphone. The sensor is accurate for measuring changes in sodium levels in saliva caused by liquids, but solid foods can cause spikes if a piece of food touches the sensor. However, the researchers believe that they can calibrate the monitoring software to filter out sudden spikes or dips to create more accurate readings.
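One simple way to filter out the sudden spikes the researchers describe is a rolling median, which discards brief outliers while preserving genuine trends. This is a generic smoothing sketch with made-up readings, not the team's actual calibration software.

```python
# Suppress transient spikes in a stream of sensor readings
# using a rolling median filter.
def rolling_median(readings, window=3):
    """Replace each reading with the median of its surrounding window."""
    half = window // 2
    smoothed = []
    for i in range(len(readings)):
        lo, hi = max(0, i - half), min(len(readings), i + half + 1)
        smoothed.append(sorted(readings[lo:hi])[(hi - lo) // 2])
    return smoothed

# Hypothetical readings (arbitrary sodium-ion units): a brief spike at
# index 3 (e.g., food touching the sensor) is removed, while the
# genuine gradual rise later in the stream is preserved.
raw = [10, 11, 12, 95, 13, 14, 20, 21, 22]
print(rolling_median(raw))  # → [11, 11, 12, 13, 14, 14, 20, 21, 22]
```

A median filter is preferable to a moving average here because an average would smear the 95-unit spike across its neighbors instead of rejecting it outright.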
7. Teaching AI to Navigate Like the Brain
Researchers at DeepMind have developed an AI navigation system that uses an artificial neural network that resembles the architecture of neural structures in the brain that aid in navigation. Animal brains use a process known as path integration to determine how to move through a space, which scientists believe relies on structures known as grid cells. After training their system on examples of routes that mice used to navigate a maze, the researchers found that their system was better at navigating than other similar systems and that its artificial neural network resembled the structure of grid cells.
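At its simplest, path integration is dead reckoning: summing self-motion (velocity) estimates over time to track position, the computation grid cells are thought to support. The sketch below shows only this underlying arithmetic; DeepMind's system learns the representation with a recurrent neural network rather than computing it directly.

```python
# Dead reckoning: accumulate self-motion steps to estimate position,
# the basic computation behind path integration.
def integrate_path(start, velocities):
    """Sum (dx, dy) self-motion steps from a start position."""
    x, y = start
    for dx, dy in velocities:
        x += dx
        y += dy
    return (x, y)

# Three unit steps east, then two north, end at (3, 2).
print(integrate_path((0, 0), [(1, 0), (1, 0), (1, 0), (0, 1), (0, 1)]))  # → (3, 2)
```

The hard problem, which the grid-cell representation appears to solve, is doing this robustly when each velocity estimate is noisy and errors would otherwise accumulate without bound.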
8. Open Data Could Save Lives in the Philippines
An initiative run by the University of the Philippines called the Nationwide Operational Assessment of Hazards, or Project NOAH, is using open data to improve disaster response and mitigation efforts. Last year the university took over Project NOAH, which the national government launched in 2012, and it has developed a platform that combines 2,000 networked water-level sensors, satellite and weather data, and historical disaster data. Project NOAH has created publicly accessible high-resolution flood, landslide, and storm maps covering 70 percent of the country.
9. Spotting Autism with a Baby Monitor
A smartphone app called ChatterBaby, developed by neuroscience researchers at the University of California, Los Angeles, is using AI to interpret infants' cries and potentially identify signs of autism. ChatterBaby analyzes changes in the frequency and pattern of infant cries to serve as a rudimentary translator and can distinguish between cries that indicate hunger, pain, and other basic feelings. ChatterBaby users can also complete surveys about their infant's activity that could indicate early signs of autism, such as gaze aversion, and ChatterBaby pairs this data with recordings of their infant's cries. ChatterBaby's developers will use this data to develop a machine learning system that will attempt to find audio clues linked with autism.
10. AI Could Make Phone Calls For You
Google has developed an AI tool called Duplex that can make phone calls and conduct natural-sounding conversations to complete simple tasks, such as scheduling a haircut. Duplex works with Google Assistant, its automated assistant software, but unlike other virtual assistants, Duplex can talk with realistic speech features, such as raising pitch at the end of a question and including phrases like “um,” “gotcha,” and others that make it sound like a human speaker.
Image: Orange County Archives.