This week’s list of top data news highlights covers November 15, 2025 to November 21, 2025 and includes articles on mapping bear encounters in Japan and virtually typing on any common household object.
1. Simulating a Mouse Brain

Researchers at the Allen Institute in Washington have built a digital version of the part of a mouse’s brain that handles sensing and decision-making, using a supercomputer to recreate this brain region in software. The resulting virtual cortex reproduces many of the same firing patterns and circuit interactions seen in real brains, giving scientists a controlled way to study how healthy circuits work and how diseases like Alzheimer’s and epilepsy disrupt them, without doing risky experiments on live animals.
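Large-scale cortical simulations compose huge numbers of simple model neurons. As a toy illustration of what "reproducing firing patterns in software" means, here is a single leaky integrate-and-fire neuron, a standard textbook model; the parameters and code are illustrative assumptions, not the Allen Institute's actual model:

```python
# Toy leaky integrate-and-fire neuron: a minimal sketch of the kind of unit a
# large-scale cortical simulation composes by the million. All parameters here
# are illustrative, not those of the Allen Institute model.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Return spike times (ms) for a neuron driven by a current trace."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential decays toward rest and is pushed up by input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset after firing
    return spikes

spikes = simulate_lif([20.0] * 200)  # 200 ms of constant drive
print(f"{len(spikes)} spikes, first at {spikes[0]:.0f} ms")
```

A full simulation wires many such units together with synapses, which is where the circuit interactions described above emerge.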
2. Spotting Fractures With AI

The Northern Lincolnshire and Goole NHS Foundation Trust, a regional health organization in the UK, is launching an X-ray screening system that uses AI to help emergency clinicians identify fractures and dislocations. The system analyzes each image within seconds and adds annotations that highlight areas with unusual shapes, edges, or densities that may signal an injury. This gives clinicians a faster, clearer way to spot subtle or easily missed injuries during busy emergency shifts.
3. Virtually Typing on Household Objects

Researchers at the University of Texas at Dallas have built an AR headset called PropType that lets users type on everyday objects by overlaying a virtual keyboard onto surfaces like bottles, cans, and books. Traditional AR typing requires people to type in mid-air on floating virtual keyboards, which is slow and tiring. PropType fixes this by sensing how a user holds an object and adapting the keyboard layout to match its shape, giving users a solid surface to press against and avoiding the arm fatigue of mid-air typing.
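Adapting a keyboard to an object's shape can be pictured as wrapping keys around the reachable part of its surface. The sketch below is an assumption about how such a layout step might work, not PropType's published method; the can radius and reachable arc are made-up values:

```python
import math

# Illustrative sketch (not PropType's actual algorithm): wrap one keyboard row
# around a cylindrical object, sizing each key by the arc length available in
# the region the thumbs can comfortably reach.

def layout_row_on_cylinder(keys, radius_cm, reachable_arc_deg=120.0):
    """Return (key, center_angle_deg, width_cm) tuples for a wrapped row."""
    arc_cm = 2 * math.pi * radius_cm * (reachable_arc_deg / 360.0)
    key_width = arc_cm / len(keys)
    step = reachable_arc_deg / len(keys)
    return [(k, -reachable_arc_deg / 2 + step * (i + 0.5), key_width)
            for i, k in enumerate(keys)]

# A soda can is roughly 3.3 cm in radius (assumed value).
for key, angle, width in layout_row_on_cylinder("ASDFG", 3.3):
    print(f"{key}: {angle:+6.1f} deg, {width:.2f} cm wide")
```

A flat object like a book would use the same idea with straight-line spacing instead of arc length.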
4. Turning Sketches Into 3D Models

MIT researchers have developed an AI system that makes it easier for beginners to turn simple sketches into 3D objects on a computer. The team recorded more than 41,000 videos of designers clicking, dragging, and choosing tools as they built 3D shapes. After training on these examples, the AI system can take a user’s sketch and handle the on-screen steps needed to turn it into a basic 3D shape. Users can then build on this base to create more complex designs, while the tool automates the repetitive, precise actions that usually take a long time to learn.
5. Robotically Helping with Chores
Sunday Robotics, a startup in California, has built a mobile home robot, called Memo, with two arms that can make coffee, clear tables, and load a dishwasher. To teach Memo these skills, the company trained it on data from human workers who wore special gloves while doing real household chores. The gloves capture how people naturally grasp objects, coordinate their fingers, and move through each step, and Memo learns from these recordings and adapts the motions to its own hands. This gives the robot more realistic, humanlike dexterity and helps it handle everyday mess and irregular objects.
6. Shopping for Target Items on ChatGPT
Target is partnering with OpenAI to let customers browse and buy Target products directly through ChatGPT. Instead of opening Target’s website or app, users can tag Target on the chatbot, describe what they need, and receive specific product suggestions that they can purchase without leaving the chat.
7. Improving Milky Way Mapping
Researchers at RIKEN, Japan’s national science research institute, have built the first digital model of the Milky Way that tracks the motion of individual stars rather than grouping them into coarse bundles. The team used high-performance computing and AI techniques to follow how gravity moves each star and how gas flows through the galaxy, representing more than 100 billion stars as separate points. This level of detail lets astronomers see how events such as supernovae push gas and heavy elements around, turning earlier static sketches of the Milky Way into a dynamic map they can compare directly with telescope observations.
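At the heart of any simulation that "follows how gravity moves each star" is an N-body integration step. The toy direct-sum leapfrog below illustrates the idea for a handful of point masses; RIKEN's galaxy model uses far more sophisticated methods (and AI surrogates) to reach 100 billion stars, so this is a conceptual sketch in made-up units, not their code:

```python
import math

G = 1.0           # gravitational constant in toy units
SOFTENING = 0.01  # avoids infinite forces at tiny separations

def accelerations(positions, masses):
    """Direct-sum gravity: every body feels every other body."""
    acc = [[0.0, 0.0] for _ in positions]
    for i, (xi, yi) in enumerate(positions):
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            inv_r3 = (dx * dx + dy * dy + SOFTENING ** 2) ** -1.5
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc

def leapfrog_step(positions, velocities, masses, dt):
    """Kick-drift-kick update: stable for long orbital integrations."""
    acc = accelerations(positions, masses)
    velocities = [[vx + ax * dt / 2, vy + ay * dt / 2]
                  for (vx, vy), (ax, ay) in zip(velocities, acc)]
    positions = [[x + vx * dt, y + vy * dt]
                 for (x, y), (vx, vy) in zip(positions, velocities)]
    acc = accelerations(positions, masses)
    velocities = [[vx + ax * dt / 2, vy + ay * dt / 2]
                  for (vx, vy), (ax, ay) in zip(velocities, acc)]
    return positions, velocities

# Two equal masses on a circular orbit around their common center.
pos = [[-1.0, 0.0], [1.0, 0.0]]
vel = [[0.0, -0.5], [0.0, 0.5]]
mass = [1.0, 1.0]
for _ in range(100):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.05)
```

Direct summation costs O(N²) per step, which is why galaxy-scale codes replace it with tree or grid approximations.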
8. Mapping Bear Encounters in Japan

Sophia University researchers in Tokyo have created an AI tool that predicts where people are most likely to encounter bears across 19 regions of Japan. The system analyzes recent bear sightings along with environmental data such as forest cover, road networks, and population density to identify high-risk areas like foothills, river corridors, and isolated valley roads. The team has released an interactive color-coded map that marks each square kilometer from low to very high risk, aiming to help communities stay safe as bear attacks hit record levels this year.
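Combining sightings and environmental data into a per-cell risk label can be sketched as a simple weighted score. The features are the ones named above, but the weights and thresholds below are entirely made up for illustration; the Sophia University model's actual method is not described in this summary:

```python
# Illustrative risk scoring for one 1 km x 1 km grid cell. All weights and
# thresholds are invented for this sketch, not taken from the real model.

def risk_label(recent_sightings, forest_cover, people_per_km2):
    # More sightings and more forest raise risk; dense settlement lowers it
    # slightly, since bears tend to avoid busy urban cores.
    score = (3.0 * recent_sightings
             + 2.0 * forest_cover          # fraction of cell that is forest
             - 0.001 * people_per_km2)
    if score >= 8:
        return "very high"
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

print(risk_label(recent_sightings=3, forest_cover=0.8, people_per_km2=50))    # forested foothill
print(risk_label(recent_sightings=0, forest_cover=0.1, people_per_km2=8000))  # city center
```

A real model would learn such weights from labeled encounter data rather than hand-tuning them.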
9. Training Truck Drivers in VR

South Carolina State University has opened a new VR training lab that lets commercial truck drivers practice dangerous or complex road situations in a safe classroom setting. The system recreates real traffic patterns, blind spots, and pedestrian crossings, and its headset tracks a driver’s eye movements to measure attention in different situations. The goal is to improve safety habits and reduce crash risks across the state.
10. Automating Tractor Driving
John Deere, a U.S. agricultural equipment maker, is testing AI-operated driverless tractors that plow and navigate fields on their own. The system uses cameras, sensors, and GPS to follow predetermined routes, and farmers can watch the tractor’s progress and start or stop it from their phone.
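Following a predetermined GPS route boils down to repeatedly steering toward the next waypoint until it is reached within some tolerance. This is a generic sketch of that loop in flat 2D coordinates, an assumption about how such systems work in general, not John Deere's implementation:

```python
import math

# Generic waypoint-following sketch (not John Deere's system): head toward the
# next point on a predetermined route, marking it reached within a tolerance.

def heading_to(x, y, wx, wy):
    """Bearing in degrees from the current position to a waypoint."""
    return math.degrees(math.atan2(wy - y, wx - x))

def follow_route(route, start=(0.0, 0.0), speed=1.0, tolerance=0.5):
    x, y = start
    visited = []
    for wx, wy in route:
        while math.hypot(wx - x, wy - y) > tolerance:
            theta = math.radians(heading_to(x, y, wx, wy))
            x += speed * math.cos(theta)   # advance one step toward the waypoint
            y += speed * math.sin(theta)
        visited.append((wx, wy))
    return visited, (x, y)

route = [(5.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
visited, final_pos = follow_route(route)
print(f"reached {len(visited)} waypoints, ended near {final_pos}")
```

A real tractor would add GPS-to-field coordinate conversion, steering dynamics, and the camera- and sensor-based obstacle checks mentioned above.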
