This week’s roundup of data news highlights covers March 21, 2026, to March 27, 2026, and features a robot that guides travelers through airports and an AI system that detects early signs of landslides to help protect communities before slopes fail.
1. Predicting Wildfires Earlier
Researchers at the University of Canterbury in New Zealand have created an AI-powered wildfire prediction tool that analyzes satellite imagery, weather data, and vegetation conditions to identify fire risks earlier than traditional methods such as ground patrols. The system processes incoming information to flag emerging hotspots and predict how fires may spread, giving emergency teams time to plan evacuations, deploy resources, and protect communities.
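The Canterbury team's model is not public, but the core idea of fusing data layers into a per-location risk score can be sketched in a few lines. In this toy version, the inputs, weights, and alert threshold are all invented for illustration:

```python
# Illustrative sketch: grid-based fire-risk scoring from fused inputs.
# Feature names, weights, and the 0.7 alert threshold are assumptions,
# not the University of Canterbury team's actual model.
import numpy as np

rng = np.random.default_rng(0)

# Per-cell inputs on a 100x100 grid, each normalized to [0, 1]:
vegetation_dryness = rng.random((100, 100))  # from satellite imagery
temperature = rng.random((100, 100))         # from weather data
wind_speed = rng.random((100, 100))
humidity = rng.random((100, 100))

# Weighted combination passed through a logistic squash to get a
# probability-like risk score per grid cell.
logit = (2.0 * vegetation_dryness + 1.5 * temperature
         + 1.0 * wind_speed - 1.5 * humidity - 1.0)
risk = 1.0 / (1.0 + np.exp(-logit))

# Flag emerging hotspots: cells whose risk crosses the alert threshold.
hotspots = np.argwhere(risk > 0.7)
print(f"{len(hotspots)} grid cells flagged for review")
```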
2. Protecting Boats From Theft
U.K.-based Parkstone Yacht Club has partnered with local police to launch Moor Mesh, a security app that links directly to the club’s cameras and alerts boat owners and officers when it detects suspected theft or unauthorized access. The system tracks motion around docked boats and sends real-time video clips and location-based alerts to both the app and police, enabling faster responses and stronger marina security.
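Moor Mesh's pipeline has not been published, but a minimal motion-alert loop of this kind can be sketched with frame differencing. The video source and sensitivity threshold below are placeholders:

```python
# Illustrative sketch of camera-based motion alerting with OpenCV
# frame differencing; Moor Mesh's actual pipeline is not public.
import cv2

cap = cv2.VideoCapture("dock_camera.mp4")  # placeholder video source
ok, prev = cap.read()
if not ok:
    raise SystemExit("no video source available")
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    # Difference against the previous frame; a large changed area
    # suggests movement around a docked boat.
    delta = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)

    if cv2.countNonZero(mask) > 5000:  # assumed sensitivity threshold
        print("Motion detected: push clip and location alert")  # stand-in for the app/police alert
    prev_gray = gray
```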
3. Offering an Alternative to Police Helicopters
U.S.-based drone company Brinc has built a drone called Guardian that serves as a more efficient alternative to police helicopters. The aircraft streams high-resolution video over Starlink satellite internet, which provides connectivity in remote areas, allowing officers to deploy it within minutes and monitor emergencies from afar. Its onboard sensors and autonomous flight features give first responders real-time situational awareness, enabling faster emergency response.
4. Helping Travelers Find Flights
U.S.-based robotics company Inbot has partnered with San Jose International Airport to deploy an autonomous customer-service robot called Jose that helps travelers navigate terminals and find gates or services. The robot uses speech recognition, mapping, and real-time sensor data to answer questions and guide passengers. It also alerts airport staff to issues such as passengers needing assistance, improving efficiency and reducing wait times.
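Inbot has not described its navigation stack, but guiding a passenger to a gate reduces to shortest-path search over a mapped terminal. Here is a minimal sketch using Dijkstra's algorithm over an invented waypoint map:

```python
# Illustrative sketch: gate guidance as shortest-path search. The map
# and walk times are made up; Inbot's actual system is not public.
import heapq

# Walking times in seconds between mapped waypoints (assumed values).
terminal_map = {
    "Entrance": {"Security": 120},
    "Security": {"Entrance": 120, "Food Court": 90, "Gate A1": 200},
    "Food Court": {"Security": 90, "Gate A1": 60, "Gate A2": 80},
    "Gate A1": {"Security": 200, "Food Court": 60, "Gate A2": 40},
    "Gate A2": {"Food Court": 80, "Gate A1": 40},
}

def route(start: str, goal: str) -> list[str]:
    """Dijkstra's algorithm: returns the fastest waypoint sequence."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, walk_time in terminal_map[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + walk_time, nxt, path + [nxt]))
    return []

print(" -> ".join(route("Entrance", "Gate A2")))
```

In practice the robot would re-plan continuously as its sensors update the map, but the routing step itself stays this simple.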
5. Tracing the Connections Between Songs
Spotify has created a new feature called SongDNA that lets listeners explore the connections behind their favorite tracks. The tool analyzes songwriting credits, samples (reused portions of other songs), production links, and shared collaborators to show how artists and genres intersect. By surfacing these relationships in an interactive interface, SongDNA helps users discover new music and trace how sounds evolve over time.
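Spotify has not published SongDNA's internals, but the underlying data structure is naturally a graph whose edges record why two tracks are linked. A minimal sketch, with invented tracks and link types:

```python
# Illustrative sketch: song relationships as a labeled graph. The data
# and edge types are invented, not SongDNA's actual schema.
import networkx as nx

g = nx.Graph()
g.add_edge("Track A", "Track B", link="shared songwriter")
g.add_edge("Track B", "Track C", link="samples")
g.add_edge("Track A", "Track D", link="same producer")

def connections(track: str) -> list[tuple[str, str]]:
    """List each neighboring track with the reason it is connected."""
    return [(other, g.edges[track, other]["link"]) for other in g.neighbors(track)]

for other, reason in connections("Track A"):
    print(f"Track A -> {other}: {reason}")
```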
6. Detecting Early Signs of Landslides
Researchers at the University of Melbourne in Australia have created an AI system that detects early warning signs of landslides and avalanches by analyzing satellite radar images that capture extremely small shifts in the Earth’s surface. The model identifies movements in soil and rock too subtle for humans to see and learns patterns of instability by comparing new data with past examples, allowing it to spot abnormal movement long before a slope fails.
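The Melbourne team's model is more sophisticated, but the core step of comparing new radar readings against each location's movement history can be illustrated with a simple anomaly check. All data and thresholds below are synthetic:

```python
# Illustrative sketch: flagging abnormal ground movement in satellite
# radar displacement series. Data and the z-score threshold are
# assumptions, not the University of Melbourne model.
import numpy as np

rng = np.random.default_rng(1)

# Displacement history (mm) for 500 monitored points over 100 dates.
history = rng.normal(0.0, 0.5, size=(500, 100))
latest = rng.normal(0.0, 0.5, size=500)
latest[42] = 6.0  # inject one accelerating slope for the demo

# A point is anomalous if its newest reading sits far outside the
# pattern of its own past movement.
mean = history.mean(axis=1)
std = history.std(axis=1)
z = np.abs(latest - mean) / std

for point in np.flatnonzero(z > 4.0):
    print(f"Point {point}: unusual movement (z = {z[point]:.1f}), inspect slope")
```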
7. Expanding Voice and Video Searching
Google has announced a global rollout of an AI-powered feature called Search Live that lets users point their phone camera at objects, such as landmarks, and ask real-time questions. The system continuously analyzes the live video feed, combines visual cues with spoken queries, and understands what the user is seeing. It generates conversational answers that update instantly as the scene or question changes, helping users get information without typing or switching apps.
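Google has not detailed how Search Live works internally, but the interaction pattern, pairing the latest camera frame with the current spoken question and refreshing the answer as either changes, can be sketched as a loop. The model call below is a hypothetical stand-in, not a real Google API:

```python
# Illustrative sketch of the Search Live interaction loop. The stream
# and answer_visual_question are hypothetical stand-ins for the phone's
# live feed and a multimodal model; this is not Google's actual API.

# Simulated stream of (camera frame, spoken question) pairs.
stream = [
    ("frame_001", "What landmark is this?"),
    ("frame_002", "What landmark is this?"),  # same question, new frame
    ("frame_003", "When was it built?"),      # follow-up question
]

def answer_visual_question(frame: str, question: str) -> str:
    """Hypothetical multimodal-model stand-in (not a real API)."""
    return f"Answer for '{question}' grounded in {frame}"

last_question = None
for frame, question in stream:
    # Refresh the answer whenever the spoken question changes; a real
    # system would also re-answer when the scene itself changes.
    if question != last_question:
        print(answer_visual_question(frame, question))
        last_question = question
```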
8. Finding Cancer Care Quicker
Utah-based nonprofit HealthTree Foundation has created an AI platform that helps cancer patients find treatments by analyzing their medical history, genetic data, and past clinical outcomes, meaning how similar patients responded to care. The system compares each patient’s profile with thousands of similar cases, identifies patterns, and generates personalized treatment options, including relevant clinical trials.
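HealthTree has not published its matching method, but the approach described, comparing a patient's profile against thousands of past cases, resembles nearest-neighbor search. A minimal sketch with synthetic data:

```python
# Illustrative sketch: matching a patient to similar historical cases
# with cosine similarity over numeric feature vectors. Features, data,
# and labels are invented, not HealthTree's actual system.
import numpy as np

rng = np.random.default_rng(2)

# Each row encodes one past patient (e.g., normalized age, lab values,
# genetic markers); each has a recorded treatment label.
past_patients = rng.random((1000, 8))
outcomes = rng.choice(["therapy A", "therapy B", "trial NCT000"], size=1000)

new_patient = rng.random(8)

# Cosine similarity between the new profile and every past case.
norms = np.linalg.norm(past_patients, axis=1) * np.linalg.norm(new_patient)
similarity = past_patients @ new_patient / norms

# Surface the treatments recorded for the closest matches.
top = np.argsort(similarity)[-20:]
options, counts = np.unique(outcomes[top], return_counts=True)
for option, count in zip(options, counts):
    print(f"{option}: chosen by {count} of the 20 most similar patients")
```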
9. Controlling Robots With Gestures
Engineers at the Massachusetts Institute of Technology have built a wrist-worn device that translates arm and hand movements into precise robotic actions. The system uses muscle-sensing electrodes, small sensors placed on the skin that detect muscle activity, along with motion data to interpret intended gestures in real time. Its AI model learns each user’s patterns and enables more intuitive, hands-free control for manufacturing, logistics, or assistive robotics.
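MIT's model details are not public, but the pipeline described, classifying gestures from muscle and motion features and mapping each to a robot command, can be sketched with an off-the-shelf classifier. Features, labels, and commands below are invented:

```python
# Illustrative sketch: classifying intended gestures from windowed
# muscle-signal features, then mapping each gesture to a robot command.
# All features, labels, and commands are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Training windows: each row is features extracted from a short burst
# of electrode and motion readings (e.g., per-channel RMS amplitude).
X_train = rng.random((600, 12))
y_train = rng.choice(["open", "close", "rotate"], size=600)

# Per-user calibration: the model learns this wearer's signal patterns.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

commands = {"open": "gripper.open()", "close": "gripper.close()", "rotate": "wrist.rotate(90)"}
gesture = model.predict(rng.random((1, 12)))[0]
print(f"Detected '{gesture}' -> issue {commands[gesture]}")
```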
10. Harvesting Strawberries With Robots
Researchers at the University of Essex in the U.K. have built an autonomous fruit-picking robot that uses AI-powered vision to identify ripe strawberries and harvest them without damaging plants. The system analyzes color, shape, and position to decide which berries are ready. It processes images to separate fruit from leaves, estimate ripeness, and guide a soft hand-like gripper to the exact picking point with millimeter-level precision.
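The Essex team has not released its vision code, but ripeness estimation from color can be illustrated with simple HSV thresholding: the larger the share of red pixels in a detected berry region, the riper the fruit. The color ranges and cutoff below are assumptions:

```python
# Illustrative sketch: ripeness from the share of red pixels in a berry
# crop, via HSV thresholding. Ranges and the 0.8 cutoff are assumptions,
# not the Essex robot's actual model.
import cv2
import numpy as np

berry = cv2.imread("berry_crop.png")  # placeholder: image of one detected berry
if berry is None:
    raise SystemExit("no berry image found")
hsv = cv2.cvtColor(berry, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis in HSV, so mask both ends and combine.
low_red = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
high_red = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
red_mask = cv2.bitwise_or(low_red, high_red)

ripeness = cv2.countNonZero(red_mask) / red_mask.size
if ripeness > 0.8:  # assumed ripeness cutoff
    print(f"Ripeness {ripeness:.0%}: guide gripper to the picking point")
else:
    print(f"Ripeness {ripeness:.0%}: leave the berry to mature")
```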
