
10 Bits: the Data News Hotlist

by Joshua New

This week’s list of data news highlights covers July 8 – 14, 2017, and includes articles about an AI system that can spot guns in video and a glove that converts sign language into text.

1. Crowdsourcing Training for Self-Driving Cars

Mighty AI, a company that develops training data for AI developers, is using people with smartphones to annotate imagery to help train computer vision systems for self-driving cars. In exchange for small amounts of money per task, Mighty AI users analyze images of scenes that a self-driving car might encounter and annotate different objects, such as “pedestrian” or “trash can.” At least 10 automakers are working with Mighty AI to get access to the huge amount of training data needed to develop self-driving car systems.
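
The sort of labeling task described above can be pictured as a simple data record. The sketch below is illustrative only: Mighty AI has not published its schema, so the field names, box format, and payout amount are assumptions.

```python
# Minimal sketch of a crowdsourced image-annotation record of the kind a
# training-data pipeline might collect. Field names are hypothetical; Mighty AI
# has not published its internal schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str          # e.g. "pedestrian" or "trash can"
    x: int              # top-left corner, in pixels
    y: int
    width: int
    height: int

@dataclass
class AnnotationTask:
    image_id: str
    annotator_id: str
    boxes: List[BoundingBox] = field(default_factory=list)
    payout_usd: float = 0.05   # small per-task payment (illustrative amount)

task = AnnotationTask(image_id="frame_000123.jpg", annotator_id="user_42")
task.boxes.append(BoundingBox(label="pedestrian", x=310, y=140, width=60, height=180))
task.boxes.append(BoundingBox(label="trash can", x=520, y=260, width=40, height=70))
print(f"{len(task.boxes)} objects annotated in {task.image_id}")
```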

2. Protecting the Earth with AI

Microsoft has launched a new initiative called AI for Earth to spur the development of AI tools that can benefit work on biodiversity, agriculture, water, and climate change. Microsoft will donate data science tools and training for selected projects. AI for Earth has already launched three projects, including a collaboration with geospatial data company Esri to develop a mapping tool for monitoring natural resources, and a partnership with the International Crops Research Institute for the Semi-Arid Tropics in India to develop more efficient agricultural practices.

3. Automatically Spotting Guns in Videos

Researchers at the University of Granada have developed an AI system capable of detecting whenever a gun appears in a video. The researchers trained their system on a variety of video sources, including YouTube videos and action movies such as James Bond films. The system can analyze five frames per second and can spot a gun with 96.5 percent accuracy in real time.  
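
The Granada team has not released its model, but the pipeline described above, sampling roughly five frames per second and flagging frames that a classifier scores above a threshold, can be sketched as follows. The detect_gun() function is a placeholder for a trained model; the OpenCV calls are standard.

```python
# Sketch of the frame-sampling loop such a detector implies: pull roughly five
# frames per second from a video and run each through a gun classifier.
import cv2

def detect_gun(frame) -> float:
    """Placeholder: return a confidence score in [0, 1] from a trained model."""
    return 0.0

def scan_video(path: str, target_fps: float = 5.0, threshold: float = 0.5):
    cap = cv2.VideoCapture(path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(native_fps / target_fps))    # keep ~5 frames per second
    hits, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and detect_gun(frame) >= threshold:
            hits.append(index / native_fps)          # timestamp in seconds
        index += 1
    cap.release()
    return hits

# print(scan_video("action_movie.mp4"))
```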

4. Fighting Terrorists’ Financing with AI

Many large banks have begun to use AI to identify and crack down on efforts to finance terrorism. Terrorism financing can be difficult to detect because it can involve different kinds of transactions across multiple countries and small amounts of money that appear innocuous individually. Banks can use AI, such as the system developed by Pennsylvania-based data science firm QuantaVerse, to identify suspicious patterns in these behaviors and flag anomalies. For example, QuantaVerse helped U.S. authorities break up a drug trafficking ring in Panama after it detected that a holding company was repeatedly transferring large amounts of money between its businesses, a pattern QuantaVerse deemed suspicious because it resembled Hezbollah’s money laundering practices.
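
QuantaVerse has not published how its system works, so the sketch below only illustrates the general idea of anomaly detection on transaction features using an off-the-shelf model; the features and numbers are invented.

```python
# Generic illustration: score transactions for anomalies so that transfers
# forming an unusual pattern get flagged for human review. Feature names and
# values are illustrative, not QuantaVerse's actual inputs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: amount (USD), number of countries touched, transfers in past 30 days
normal = np.column_stack([
    rng.normal(2_000, 500, 500),       # typical business payments
    rng.integers(1, 3, 500),
    rng.integers(1, 10, 500),
])
suspicious = np.array([[9_500, 4, 40]])  # many transfers across several countries

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flag = model.predict(suspicious)          # -1 means "anomaly"
print("flag for review" if flag[0] == -1 else "looks routine")
```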

5. Making AI Better at Collaborating

Google has launched a new project called People + AI Research (PAIR) designed to make it easier for people to understand and work with AI systems. PAIR has already published two visualization tools for machine learning systems to help data scientists more easily identify problems in their training data.

6. Converting Sign Language to Text

Researchers at the University of California, San Diego have developed a sub-$100 smart glove capable of automatically translating a wearer’s American Sign Language (ASL) into digital text. The glove uses sensors to detect the different amounts of force a wearer applies with each finger and thumb, and a small mounted computer translates the unique force combinations of ASL signs into words and transmits them via Bluetooth to a smartphone or computer as text.
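
The glove’s actual sensor layout and code table are more elaborate than this, but the decoding step can be sketched as a lookup from a combination of per-finger force readings to a sign. The threshold and the example patterns below are illustrative assumptions, not the UCSD design.

```python
# Toy sketch of the decoding step: turn per-finger sensor readings into a letter
# by matching against stored sign patterns. Thresholds and patterns are invented.
THRESHOLD = 0.5   # normalized force above which a finger counts as "flexed"

# Illustrative patterns: one flex flag per digit (thumb, index, middle, ring, pinky)
SIGN_PATTERNS = {
    (0, 1, 1, 1, 1): "A",   # fist with thumb extended (approximate ASL 'A')
    (1, 0, 0, 1, 1): "U",   # placeholder pattern
}

def decode(readings):
    """Convert five normalized force readings into a letter, or None."""
    pattern = tuple(int(r > THRESHOLD) for r in readings)
    return SIGN_PATTERNS.get(pattern)

letter = decode([0.1, 0.8, 0.9, 0.7, 0.8])
print(letter or "unrecognized sign")   # decoded text would then go out over Bluetooth
```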

7. Underwater Robots Can Now Speak the Same Language

Scientists for the North Atlantic Treaty Organization (NATO) have developed the first international common standard for underwater communications, called JANUS. While standards for other types of communication, such as cellular networks and Wi-Fi, are common, underwater communications typically rely on acoustic signals, which travel much farther underwater than the radio signals those technologies use. JANUS will allow different kinds of underwater robotic systems and sensors to more seamlessly share data with one another.

8. Watching What Makes Neighborhoods Succeed

Researchers at the Massachusetts Institute of Technology have developed a computer vision system that can analyze images of an urban area and predict how safe people would find them, and they used this system to identify factors that contribute to a neighborhood’s success or failure over time. The researchers had their system analyze over one million pairs of photographs of neighborhoods taken seven years apart. The analysis revealed that neighborhoods that improved over time had larger populations of highly educated residents, were closer to a city’s business districts, and were closer to other successful neighborhoods. Interestingly, the analysis also found that a neighborhood’s income levels and housing prices had no bearing on whether or not it would improve.
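
A toy version of the study’s before-and-after comparison is sketched below; the safety scores and neighborhood attributes are invented stand-ins for the model outputs and census data the researchers actually used.

```python
# Toy version of the before-and-after comparison: each row is a neighborhood with
# perceived-safety scores for two photo vintages plus a few attributes. All values
# are invented; the real study used over a million image pairs and census data.
import pandas as pd

pairs = pd.DataFrame({
    "neighborhood":      ["A", "B", "C", "D"],
    "safety_2007":       [0.42, 0.55, 0.38, 0.61],   # model-predicted perceived safety
    "safety_2014":       [0.58, 0.54, 0.36, 0.70],
    "pct_college_grads": [0.34, 0.22, 0.18, 0.41],
    "km_to_downtown":    [2.1, 7.5, 9.0, 1.4],
    "median_income":     [52_000, 48_000, 45_000, 67_000],
})
pairs["improvement"] = pairs["safety_2014"] - pairs["safety_2007"]

# With real data, education and proximity to downtown correlate with improvement,
# while income (per the study) does not.
print(pairs[["improvement", "pct_college_grads", "km_to_downtown", "median_income"]]
      .corr()["improvement"])
```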

9. Mapping Behavior in a Fruit Fly’s Brain

Researchers at the research nonprofit Howard Hughes Medical Institute in Virginia have developed a method for mapping a fruit fly’s physical activities, such as walking or wing grooming, to brain activity. Researchers can easily stimulate particular regions of a fruit fly’s brain by adjusting the temperature. The researchers filmed 20,000 16-minute videos of fruit flies moving about as they adjusted the temperature to stimulate different areas of the flies’ brains. Then the researchers trained an AI system to sift through these videos, totaling 225 days of footage, automatically annotate the presence of 14 different easily recognizable behaviors, and link certain behaviors to a corresponding change in brain activity. For example, if a fly jumped more often when the researchers stimulated a particular brain region, the system could link that region’s activity to the change in behavior.
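
Once each video frame carries a behavior label, linking behaviors to brain regions largely reduces to counting. The sketch below illustrates that tallying step with invented region and behavior labels; it is not the institute’s actual analysis code.

```python
# Sketch of the mapping step: after an AI system annotates frames with behaviors,
# tally how often each behavior appears while a given brain region is being
# thermally stimulated versus at baseline. Labels are invented for illustration.
import pandas as pd

annotations = pd.DataFrame({
    "stimulated_region": ["none", "none", "R1", "R1", "R1", "R2", "R2"],
    "behavior":          ["walk", "groom", "jump", "jump", "walk", "groom", "groom"],
})

counts = (annotations
          .groupby(["stimulated_region", "behavior"])
          .size()
          .unstack(fill_value=0))

# Behaviors that spike only when a region is stimulated suggest that region
# drives the behavior (here, jumping appears only under R1 stimulation).
print(counts)
```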

10. Understanding What’s Going On In a Neural Network

Researchers at the Massachusetts Institute of Technology have developed a method for understanding how individual neurons in an artificial neural network contribute to an AI system’s analysis. When an artificial neural network analyzes an image, it processes image data into a high-level representation of a concept, such as an object or an activity. Computer scientists theorize that neural networks accomplish this either through disentangled representations, in which individual neurons detect patterns correlated to certain representations, or distributed representations, in which groups of neurons work together to identify these patterns. The researchers observed a neural network as it analyzed a curated set of images and were able to identify specific neurons that reacted to specific representations, suggesting that neural networks rely on disentangled representations. This knowledge could eventually help researchers better identify and correct potential biases in their AI systems.
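
One way to picture this kind of probe, hypothetically, is to check whether any single hidden unit’s activation tracks the presence of a concept across a labeled image set. The random activations below stand in for a real network, with one unit artificially made to behave like a concept detector; this is an illustration of the idea, not the MIT method itself.

```python
# Per-neuron probe sketch: record each hidden unit's activation over labeled images
# and check whether any single unit reliably separates images containing a concept
# from those that do not. The "activations" here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_units = 200, 64
has_concept = rng.integers(0, 2, n_images)             # 1 if the concept (e.g. "dog") is present

activations = rng.normal(0.0, 1.0, (n_images, n_units))
activations[:, 17] += 2.0 * has_concept                # unit 17 behaves as a concept detector

# Correlate each unit's activation with the concept label.
labels = (has_concept - has_concept.mean()) / has_concept.std()
acts = (activations - activations.mean(axis=0)) / activations.std(axis=0)
scores = (acts * labels[:, None]).mean(axis=0)

best = int(np.argmax(np.abs(scores)))
print(f"unit {best} correlates most with the concept (r = {scores[best]:.2f})")
# A single high-scoring unit points to a disentangled representation; if only
# groups of units separate the classes, the representation is distributed.
```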

Image: Jake Dykinga, U.S. Department of Agriculture
