This week’s roundup of data news highlights, covering April 25, 2026, to May 1, 2026, features an AI system capable of reconstructing faces from ancient human remains and new humanoid robots at Tokyo’s Haneda Airport that assist with loading cargo onto planes.
1. Generating Ancient Images
Archaeologists at Pompeii’s museum in Italy have created an AI‑generated reconstruction of a man who died during the eruption of Mount Vesuvius, offering a clearer visual interpretation of a recently uncovered skeleton. The system recreates facial features from bone structure, posture, and burial context to produce a lifelike image. The result helps researchers visualize victims more accurately and gives the public a more tangible connection to ancient lives.
2. Improving Airport Efficiency
Japan Airlines has started testing humanoid robots to support ground‑handling work at Tokyo’s Haneda Airport. The robots are remotely operated to mimic human movements, allowing them to load and unload cargo containers in tight, hazardous spaces. The system is meant to ease labor shortages, with future plans for tasks like cabin cleaning and operating ground‑support equipment, while safety‑critical duties remain with human staff.
3. Finding Mines Faster
U.S.-based mineral exploration company Earth AI has built an AI-driven system that speeds up the search for critical minerals, such as lithium and copper, across the American West. The platform combines geological modeling, satellite analysis, and autonomous drilling data to predict where to find valuable deposits. Its models improve as new samples arrive, helping geologists pinpoint potential mining locations faster.
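The idea of a model that improves as new drill samples arrive can be sketched with a toy online learner. This is purely illustrative and not Earth AI's actual system: the features (a magnetic-anomaly reading and a scaled surface copper concentration) and the simple logistic update are assumptions chosen to show the incremental-refinement pattern.

```python
import math

class OnlineDepositModel:
    """Toy online logistic model: updates deposit-probability weights
    one drill sample at a time (illustrative only)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Probability that a site hosts a deposit, given its features
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        # One gradient step on the log-loss for this single sample
        err = self.predict(x) - label
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# Hypothetical features: [magnetic anomaly, surface copper (scaled)]
model = OnlineDepositModel(n_features=2)
samples = [([1.0, 0.8], 1), ([0.1, 0.2], 0), ([0.9, 0.9], 1), ([0.2, 0.1], 0)]
for x, y in samples:  # each new drill result refines the model
    model.update(x, y)

# After a few samples, high-anomaly sites score higher than quiet ones
print(model.predict([0.95, 0.85]) > model.predict([0.05, 0.10]))
```

A production system would use far richer geological features and a real learning framework, but the loop is the same: predict, drill, update, repeat.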
4. Practicing Pronunciation
Google Translate has launched a new feature that helps users practice and refine pronunciation in real time. The tool uses speech-recognition models to analyze how closely a user’s spoken words match native-speaker patterns, providing sound-level feedback for each phrase. It also evaluates rhythm, stress, and articulation, helping learners spot errors quickly and build more natural pronunciation through guided practice.
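One common way to score how closely spoken sounds match a reference, sound by sound, is edit distance over phoneme sequences. The sketch below assumes this approach for illustration; Google's actual models are far more sophisticated, and the ARPAbet-style phonemes here are just an example.

```python
def phoneme_distance(ref, spoken):
    """Levenshtein edit distance between two phoneme sequences."""
    m, n = len(ref), len(spoken)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == spoken[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def pronunciation_score(ref, spoken):
    """Similarity in [0, 1]: 1.0 means every phoneme matched."""
    return 1.0 - phoneme_distance(ref, spoken) / max(len(ref), len(spoken))

# "water" (reference) vs. a learner who pronounced it "vater"
reference = ["W", "AO", "T", "ER"]
spoken    = ["V", "AO", "T", "ER"]
print(pronunciation_score(reference, spoken))  # 0.75: one substitution of four sounds
```

Per-phoneme alignment like this is what lets a tool point at the specific sound a learner got wrong, rather than just rejecting the whole phrase.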
5. Designing Molecules
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have built an AI system that translates natural-language prompts into new molecular structures. The model learns chemical rules from millions of known compounds, then generates molecules based on written descriptions of desired properties. This allows chemists to design candidates for drugs and materials more quickly while exploring a wider range of possible molecular combinations.
6. Accessing Medical Databases
Seattle-based health-data company Truveta has created an AI chatbot, Truveta Intelligence, that gives users insights from a large collection of U.S. clinical data. The system uses large language models to analyze billions of de-identified medical records, summarizing trends, outcomes, and patient characteristics. It links conditions, treatments, and demographics across datasets, helping scientists explore ideas faster and uncover patterns that would take months to find manually.
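The kind of cross-dataset linking described, conditions, treatments, and outcomes summarized together, boils down to grouped aggregation over de-identified records. The sketch below is a minimal illustration with made-up fields and data, not Truveta's schema or API.

```python
from collections import defaultdict

# Hypothetical de-identified records: no names or IDs, only clinical fields
records = [
    {"condition": "diabetes",     "treatment": "metformin", "improved": True},
    {"condition": "diabetes",     "treatment": "metformin", "improved": True},
    {"condition": "diabetes",     "treatment": "insulin",   "improved": False},
    {"condition": "hypertension", "treatment": "acei",      "improved": True},
]

def improvement_rates(records):
    """Outcome rate per (condition, treatment) pair."""
    totals = defaultdict(lambda: [0, 0])  # key -> [improved_count, total]
    for r in records:
        key = (r["condition"], r["treatment"])
        totals[key][0] += r["improved"]
        totals[key][1] += 1
    return {k: imp / tot for k, (imp, tot) in totals.items()}

print(improvement_rates(records))
```

A language-model layer on top of queries like this is what turns "how do outcomes differ by treatment?" into an answer in seconds instead of a months-long manual chart review.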
7. Exploring Shipwrecks
The French Navy has conducted a mission using a deep-diving robot to capture new images of a 16th-century shipwreck 4,000 meters deep. The robot combines high-resolution cameras, sonar, and autonomous navigation software that maps the seafloor in real time while stabilizing itself under extreme pressure. Its onboard AI system analyzes terrain, adjusts movement and lighting, and identifies points of interest, allowing archaeologists to document the fragile wreckage safely.
8. Predicting Cardiac Arrests
Clinicians at the University of Pennsylvania have built an AI model that predicts cardiac arrest hours before it happens. The system processes continuous streams of patient data, including vital signs, lab results, and doctors’ notes, and learns patterns that often come before cardiac collapse. Its predictive engine updates risk scores in real time, helping clinicians identify at-risk patients earlier and intervene before emergencies occur.
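A risk score that "updates in real time" as vitals stream in can be sketched with an exponentially weighted average over per-reading risk. The thresholds and weights below are illustrative placeholders, not clinical criteria, and this is not the Penn team's model.

```python
def vital_risk(hr, spo2, sbp):
    """Toy per-reading risk in [0, 1] from heart rate, oxygen saturation,
    and systolic blood pressure (thresholds are illustrative, not clinical)."""
    risk = 0.0
    if hr > 120 or hr < 45:
        risk += 0.4
    if spo2 < 90:
        risk += 0.4
    if sbp < 90:
        risk += 0.2
    return min(risk, 1.0)

class StreamingRiskScore:
    """Exponentially weighted score updated with each new reading,
    so a deteriorating trend pushes the score up over time."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.score = 0.0

    def update(self, hr, spo2, sbp):
        self.score = (self.alpha * vital_risk(hr, spo2, sbp)
                      + (1 - self.alpha) * self.score)
        return self.score

monitor = StreamingRiskScore()
# Simulated stream: stable vitals, then rising heart rate and falling SpO2/BP
for hr, spo2, sbp in [(80, 97, 120), (95, 95, 110), (125, 88, 95), (130, 86, 85)]:
    monitor.update(hr, spo2, sbp)
print(monitor.score)  # climbs from 0.0 as the patient deteriorates
```

Smoothing across readings is the key design choice: a single noisy measurement nudges the score, while a sustained trend drives it up and can trigger an earlier alert.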
9. Making Siri Smarter
Apple plans to introduce a new Siri-powered mode in the iPhone’s Camera app with iOS 27, bringing visual intelligence directly into the camera view. The system uses on-device AI to analyze what the camera sees, identifying objects, extracting text, and interpreting details so Siri can act instantly. This allows users to scan cards, save information, and ask ChatGPT questions about their surroundings.
10. Retrieving Radioactive Waste
Germany-based engineering firm Bilfinger has partnered with research institute Fraunhofer IOSB to develop a robotic system for handling nuclear waste. The system uses AI-guided robotic arms with cameras and force sensors that measure pressure and resistance, allowing precise movement in hazardous conditions. This enables robots to open aging waste containers, sort debris, and safely repackage materials in areas too dangerous for humans.
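Force-sensor-guided motion of the kind described usually means advancing in small steps and halting when measured resistance crosses a safety limit. The loop below is a simplified sketch with made-up numbers, not Bilfinger's or Fraunhofer IOSB's control software.

```python
def advance_tool(force_readings, limit=15.0, step_mm=0.5):
    """Advance a robotic tool in small steps, stopping when the force
    sensor exceeds a safety limit in newtons (purely illustrative)."""
    depth = 0.0
    for force in force_readings:
        if force > limit:
            return depth, "halted: resistance exceeded limit"
        depth += step_mm
    return depth, "completed"

# Simulated sensor stream: force rises sharply as the tool meets debris
readings = [2.0, 3.5, 5.0, 9.0, 16.5, 20.0]
print(advance_tool(readings))  # stops before the high-force readings
```

Checking force before each increment, rather than after a full motion, is what lets the arm probe unknown container contents without crushing or puncturing them.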