This week’s list of top data news covers December 13 to December 19, 2025, and includes articles on robots that build physical objects from text descriptions and deep-learning models that predict how cells change shape.
1. Predicting Cell Behavior During Early Development
Researchers at MIT have developed a deep-learning model that predicts how individual cells move, fold, divide, and rearrange during the earliest stage of fruit fly development. The model analyzes high-resolution video of embryos and learns geometric patterns such as cell position, shape, and contact with neighboring cells. Using this information, it can forecast minute-by-minute cell behavior with about 90 percent accuracy during the first hour of development. The researchers now plan to apply this approach to other species and human tissues to reveal early disease patterns, including those linked to asthma and cancer.
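To give a feel for the kind of geometric forecasting involved, the toy baseline below extrapolates a cell's next position from its recent track under a constant-velocity assumption. This is only an illustrative sketch; the actual model is a deep network that also learns from cell shape and neighbor contacts:

```python
def predict_next_position(track):
    """Naive baseline: extrapolate a cell's next (x, y) position from its
    last two observed frames, assuming constant velocity. A hedged toy
    stand-in for the geometric deep-learning model, not the real method."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)
```

A baseline like this is what a learned model must beat: if cells only drifted at constant speed, extrapolation would suffice, and the model's value lies in predicting folds, divisions, and rearrangements that break that assumption.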
2. Understanding the Sun’s Surface with AI
Researchers at the University of Hawaiʻi Institute for Astronomy have developed an AI tool that creates accurate 3D maps of the Sun’s magnetic field. The tool starts with observations from the Daniel K. Inouye Solar Telescope, which measure the magnetic field in the Sun’s atmosphere but leave uncertainties about its direction and height. The AI tool helps fill in the gaps by combining those observations with basic physical laws that govern how magnetic fields behave. This allows scientists to determine the true direction and height of magnetic structures and better predict solar activity that can affect Earth.
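One simple flavor of such a physics-motivated constraint is resolving directional ambiguity by preferring smooth fields. The toy routine below flips each 2-D vector in a row, when needed, so it aligns with its neighbor; it is purely illustrative and not the Hawaiʻi team's actual algorithm:

```python
def disambiguate(vectors):
    """Resolve a 180-degree ambiguity in a row of 2-D field vectors by
    flipping each one, if needed, to align with its left neighbor -- a toy
    stand-in for the smoothness constraints a physics-informed model enforces."""
    out = [vectors[0]]
    for vx, vy in vectors[1:]:
        px, py = out[-1]
        # keep whichever orientation has a positive dot product with the neighbor
        if vx * px + vy * py < 0:
            vx, vy = -vx, -vy
        out.append((vx, vy))
    return out
```

The real problem is harder because the constraints come from magnetohydrodynamics in three dimensions, but the principle is the same: physics narrows down which of many possible field configurations is consistent with the data.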
3. Building Objects by Describing Them
Researchers at MIT and partner institutions have developed an AI-driven robotic system that can build multi-component objects, such as chairs and shelves, directly from simple text descriptions. The system translates a user’s prompt into a 3D design, reasons about the object’s shape and intended use, and determines how prefabricated parts should be arranged. A robotic arm then physically assembles the object from reusable components. The system aims to make physical design faster, more intuitive, and more accessible.
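The prompt-to-parts step can be caricatured as a lookup plus an ordered plan. The sketch below uses a hypothetical parts catalog and step names, all invented for illustration; the real system reasons over 3-D geometry rather than keywords:

```python
# Hypothetical parts catalog; part names and quantities are illustrative.
CATALOG = {
    "chair": ["seat panel", "backrest panel", "leg", "leg", "leg", "leg"],
    "shelf": ["side panel", "side panel", "board", "board", "board"],
}

def plan_assembly(prompt):
    """Toy planner: match a keyword in the user's prompt to a parts list
    and return ordered assembly steps for a robot arm. A minimal sketch,
    not the MIT system's actual pipeline."""
    for kind, parts in CATALOG.items():
        if kind in prompt.lower():
            return [f"attach {p}" for p in parts]
    raise ValueError("no known object in prompt")
```

Because the components are prefabricated and reusable, the hard part the researchers solve is not fabrication but deciding a valid arrangement and assembly order, which this lookup trivializes.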
4. Extending the Life of Quantum Information
Princeton engineers have developed a superconducting qubit that preserves quantum information far longer than current designs by tackling one of quantum computing’s core limitations: energy loss caused by imperfect materials. Using tantalum metal, which withstands aggressive fabrication without developing defects, and pairing it with ultra-high-quality silicon, the researchers minimized the microscopic flaws that normally disrupt qubit stability. As a result, the qubit maintains coherence—its ability to hold information—up to 15 times longer than many existing processors.
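Why a longer coherence time matters can be seen with a simple exponential decay model: the fraction of quantum information retained after a given time falls off with a characteristic timescale, so multiplying that timescale by 15 dramatically increases what survives. The model below is a standard textbook simplification, not a description of the Princeton device:

```python
import math

def coherence(t_us, t2_us):
    """Fraction of coherence retained after t_us microseconds, using a
    simple exponential decay with characteristic time t2_us. Illustrative
    only: real decoherence involves multiple loss channels."""
    return math.exp(-t_us / t2_us)
```

At a fixed operating time, a qubit with a 15x longer characteristic time retains far more coherence, which translates directly into more quantum operations completed before the information degrades.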
5. Amplifying Voices with Meta’s AI Glasses
Meta has begun rolling out a software update for its AI-powered smart glasses that adds a new conversation-focused audio mode. In noisy settings like restaurants and bars, the glasses use AI and microphones designed to pick up sound coming from straight ahead to amplify the voice of the person the wearer is listening to, rather than all surrounding noise. The AI system processes the incoming audio in real time, identifying speech patterns and filtering out competing sounds so the intended speaker’s voice remains clear and intelligible.
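The classic building block for favoring sound from straight ahead is delay-and-sum beamforming: a source in front reaches both microphones in phase and adds up, while an off-axis source arrives out of phase and partially cancels. The simulation below demonstrates that effect with synthetic sine waves; it is a minimal sketch of the principle, not Meta's actual audio pipeline:

```python
import math

def delay_and_sum(mic_a, mic_b):
    """Sum two microphone channels sample by sample. In-phase (frontal)
    sounds reinforce; out-of-phase (off-axis) sounds cancel. A toy
    illustration of directional pickup."""
    return [a + b for a, b in zip(mic_a, mic_b)]

n, freq, rate = 256, 1000.0, 16000.0
tone = [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

front_out = delay_and_sum(tone, tone)          # frontal source: in phase, doubled
# off-axis source reaches the second mic half a period late (8 samples at 1 kHz)
side_delayed = [0.0] * 8 + tone[:-8]
side_out = delay_and_sum(tone, side_delayed)   # out of phase, cancels
```

With two mics the cancellation is frequency-dependent; glasses with several microphones can steer a much sharper and more broadband pickup pattern toward the person the wearer faces.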
6. Modeling Materials in Extreme Hypersonic Flight
Scientists at Sandia National Laboratories have developed a computational model that predicts how the protective materials that keep high-speed vehicles from overheating will behave during hypersonic flight. The model learns from lab tests that show how materials heat up and wear away, and from flight data that shows how those materials behave in real high-speed conditions. By linking these two sets of data, the model can estimate how a material will perform before it is ever flown. This allows engineers to quickly compare and improve heat-shield designs.
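The simplest possible version of "linking two sets of data" is fitting a correction that maps lab measurements onto flight observations. The least-squares line below stands in for that idea with made-up numbers; the Sandia model is far richer than a straight line:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b. Here xs could be
    lab-measured wear rates and ys the corresponding flight-observed
    rates -- a toy calibration, not Sandia's actual model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx
```

Once such a mapping is fitted, a new material only needs lab testing: its predicted flight behavior comes from the calibration, which is what lets engineers screen heat-shield designs without flying each one.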
7. Detecting Tire Failure on Trucks
Utah’s Department of Transportation has deployed an AI-powered in-road tire monitoring system at truck ports of entry to detect tire problems. The system uses in‑road sensors and machine learning to identify unusual patterns, such as uneven pressure distribution or abnormal vibrations, and alerts inspectors to examine those tires more closely during routine checks. UDOT intends to expand the technology to all ports of entry to strengthen highway safety and reduce preventable incidents involving heavy freight vehicles.
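A minimal version of flagging "unusual patterns" is a statistical outlier test across a single truck's tires: a tire whose pressure deviates sharply from the vehicle's own average gets flagged for inspection. The sketch below uses a z-score threshold and invented numbers; UDOT's deployed system learns from in-road sensor signatures rather than a fixed rule:

```python
from statistics import mean, stdev

def flag_tires(pressures, threshold=2.0):
    """Return indices of tires whose pressure deviates from the truck's
    average by more than `threshold` standard deviations. A simple
    stand-in for learned anomaly detection, not UDOT's actual model."""
    m, s = mean(pressures), stdev(pressures)
    return [i for i, p in enumerate(pressures)
            if s and abs(p - m) / s > threshold]
```

Comparing each tire against the same vehicle's other tires sidesteps differences between truck types: what matters is not the absolute pressure but how far one tire strays from its peers.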
8. Converting Live Sports Footage Into 3D Game Data
Peripheral Labs is using AI and computer vision to turn standard sports video into a 3D digital view of the game. The system analyzes footage to understand where players are on the field, how they move, and how their bodies are positioned at each moment. This makes it possible to view plays from any angle, follow a single player, or pause and examine key moments.
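A core step in lifting broadcast video into field coordinates is a homography: a 3x3 matrix, calibrated from known field markings, that maps pixel positions onto the playing surface. The function below applies such a matrix; the example matrix itself is illustrative, not from Peripheral Labs:

```python
def pixel_to_field(H, u, v):
    """Map a pixel (u, v) to field coordinates using a 3x3 homography H
    (row-major nested lists). In practice H is calibrated per camera from
    known field markings; the values used here are illustrative."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w   # perspective divide
```

With every player's pixel position mapped to field coordinates frame by frame, the system can re-render the play from any virtual camera angle, which is what makes following a single player or pausing a key moment possible.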
9. Making Smart Home Devices Work Across Brands
Ikea has launched a new line of smart home products built around Matter, a shared smart-home standard that allows devices from different brands to work together. Matter provides a common communication system so smart lights, sensors, plugs, and remotes can exchange information instead of operating in separate apps or closed systems. Each device sends simple signals—such as motion detected, lights turned on, or energy being used—into a shared network that other devices can respond to. For Ikea, this makes its smart home products easier to combine with non-Ikea devices, expanding their usefulness and reach.
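The "shared network other devices can respond to" pattern is essentially publish/subscribe: devices emit simple events, and any subscriber, from any brand, can react. The sketch below is a generic event bus with invented class and event names, not an implementation of the Matter protocol:

```python
class EventBus:
    """Minimal publish/subscribe sketch of a shared smart-home network.
    Names and events are illustrative; the real Matter standard defines
    a full protocol stack, not just an event bus."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event, handler):
        self.subscribers.setdefault(event, []).append(handler)

    def publish(self, event):
        for handler in self.subscribers.get(event, []):
            handler()

bus = EventBus()
light = {"on": False}
# a motion sensor from one brand can switch on a light from another
bus.subscribe("motion detected", lambda: light.update(on=True))
bus.publish("motion detected")
```

The interoperability win is that neither device needs to know the other exists: they only agree on the shared vocabulary of events, which is what a common standard like Matter provides.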
10. Turning Emergency Calls Into Medical Records
A regional emergency medical service in southern England is testing an AI tool inside its emergency call centers. In these call centers, clinicians speak with patients by phone to assess symptoms and decide what care is needed. The AI software records those phone conversations and automatically turns them into structured medical notes, which the clinicians can then review and approve. The goal is to reduce time spent writing documentation after calls so clinicians can handle more patients and focus on decision-making rather than paperwork.
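The transcript-to-structured-note step can be caricatured with simple pattern matching: pull out a symptom and a duration and file them into named fields. The sketch below is a deliberately crude stand-in; the deployed tool combines speech recognition with AI-generated notes, and clinicians review everything before it enters the record:

```python
import re

def structure_note(transcript):
    """Extract a few structured fields from a call transcript with
    regular expressions. The field names and patterns are illustrative;
    the real tool uses speech recognition plus AI, with clinician review."""
    note = {}
    m = re.search(r"chest pain|shortness of breath|dizzy", transcript, re.I)
    if m:
        note["symptom"] = m.group().lower()
    m = re.search(r"for (?:about )?(\d+\s*(?:minutes|hours|days))", transcript, re.I)
    if m:
        note["duration"] = m.group(1)
    return note
```

Even this crude version shows where the time savings come from: the clinician's output shifts from writing notes from scratch to reviewing and approving a pre-filled draft.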
