10 Bits: the Data News Hotlist
This week’s list of data news highlights covers February 24 – March 2, 2018, and includes articles about a smart mouthguard that can detect concussions and a new algorithm that can let AI learn from hindsight.
Legal AI software company LawGeex has developed an AI system capable of outperforming human lawyers at evaluating legal contracts. Working with law professors from Stanford University, Duke University, and the University of Southern California, LawGeex had its system compete with 20 experienced lawyers to analyze five non-disclosure agreements in four hours, scoring the system and the lawyers on how accurately they identified 30 distinct legal issues. On average, the human lawyers took 92 minutes to complete the test and were 85 percent accurate, while the AI system took just 26 seconds and was 95 percent accurate.
A company called Prevent Biometrics has developed a sensor-laden mouthguard for athletes capable of detecting impacts that could cause a concussion. The mouthguard uses accelerometers to monitor the direction and force of an impact and a proximity sensor to ensure it is securely in place on a wearer’s teeth, allowing it to be significantly more accurate than helmet-mounted sensors, which do not directly measure skull impacts. The mouthguard wirelessly transmits impact data via Bluetooth to an app and can flag impacts above a certain force threshold, prompting a medical examination to determine if a player is concussed.
The California Department of Motor Vehicles has announced that it will no longer require autonomous vehicles to have a human in the driver’s seat prepared to take over in licensed tests, allowing for the testing of fully driverless vehicles. California, despite being home to the majority of the autonomous vehicle industry, has imposed significant restrictions on the testing of the technology on public roads, causing many companies to move testing to other states with more permissive regulations. The updated rule will go into effect on April 2, 2018.
Airbus, working with the Space Administration of the German Aerospace Center, has developed an AI system called the Crew Interactive Mobile Companion (CIMON) to serve as an automated assistant for the crew of the International Space Station (ISS). CIMON, which is housed in a medicine-ball-sized shell to allow it to fly freely aboard the ISS, uses face and voice recognition to interact with the crew and provide information about technical procedures. CIMON will also gather data about complex activities performed on the ISS as well as its interactions with crew members, which could provide insight into how to reduce the psychological stress of long space missions.
Hyundai has developed autonomous driving software capable of navigating traffic circles, also known as roundabouts, which can be particularly challenging because they involve sharp turns and lane changes and can lack the straightforward traffic signals of other kinds of intersections. Hyundai’s system relies on cameras, four radar units, and six LIDAR sensors to keep track of every vehicle within a 100 meter radius, allowing it to safely make its way through roundabouts.
Zoos around the world are populating the Zoological Information Management System (ZIMS), developed by conservation nonprofit Species360, with veterinary and husbandry data about their animals to make it easier to find them mates. ZIMS logs the medical history, origins, and behaviors of each animal, enabling zookeepers to find suitable mates that would produce healthy offspring, even if the animals live in different zoos, making it particularly useful for rare species.
Startup Viz.ai has developed AI software that can analyze brain scans of emergency room patients, detect blockages in brain blood vessels, a common sign of a stroke, and prompt a specialist to intervene. Strokes occur when blood supply to the brain is interrupted, and the longer it takes to restore blood supply, the more damage a brain can suffer. The U.S. Food and Drug Administration (FDA) approved Viz.ai’s system for use in February 2018 using a new regulatory classification designed to make it easier to bring triage tools to market.
AI research nonprofit OpenAI has developed an algorithm called Hindsight Experience Replay (HER) that enables an AI system to learn from past failures during training, similar to how humans learn from hindsight. AI systems that use reinforcement learning attempt to complete tasks through trial and error, repeating and building off of actions that result in positive feedback until they achieve the desired goal. HER allows a reinforcement learning system to retroactively interpret negative feedback from a failed attempt as positive feedback toward achieving the outcome it actually reached, as if that had been the system’s goal all along, enabling the system to learn more as it trains.
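The core idea behind HER, relabeling a failed attempt as a success for the goal the system actually reached, can be illustrated in a few lines. The sketch below is a simplified illustration, not OpenAI’s implementation; the trajectory format, field names, and binary reward are assumptions made for clarity.

```python
def relabel_with_hindsight(trajectory):
    """Hindsight relabeling sketch (illustrative, not OpenAI's code):
    take a trajectory that failed to reach its original goal, and
    pretend the state it actually ended in was the goal all along,
    so the final step earns a positive reward instead of a negative one."""
    achieved = trajectory[-1]["achieved_state"]
    relabeled = []
    for step in trajectory:
        new_step = dict(step)  # copy so the original experience is kept too
        new_step["goal"] = achieved
        # Reward is positive only where the achieved state matches the new goal
        new_step["reward"] = 1.0 if step["achieved_state"] == achieved else 0.0
        relabeled.append(new_step)
    return relabeled
```

In practice both the original and the relabeled copies of each trajectory are stored in the replay buffer, so the system learns from the failure and from the "accidental success" at once.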
Startup Augean Robotics has developed a computerized wheelbarrow robot called Burro that drives itself around a farm to help workers cart away produce. Burro can detect and recognize farmworkers and can be instructed to follow a worker as they move throughout a field or trained to continuously drive along a specified looped path to serve as a makeshift automated conveyor belt.
The U.S. Department of Veterans Affairs (VA) has partnered with DeepMind to develop machine learning tools that can predict health risks while a patient is in a hospital to improve patient care. Patient deterioration in hospitals accounts for 11 percent of inpatient deaths globally, due in part to conditions such as acute kidney injury, which can be fatal if a hospital fails to detect and treat them early. DeepMind will analyze 700,000 anonymized VA health records to identify the most common signs of risk that could result in patient deterioration.
Image: Art G.