The Center for Data Innovation recently spoke with Ben Luria, CEO of Hirundo, an Israel-based company that builds high-resolution model editing tools to address hallucinations, bias, and security flaws in large language models (LLMs). Luria discussed how the company’s platform uses a redaction-based approach to identify and erase unwanted behaviors from AI systems to make them more accurate and trustworthy.
David Kertai: What makes it difficult to fix problems in trained AI models?
Ben Luria: The core challenge is that once a model is trained or fine-tuned, it’s costly and time-consuming to selectively remove specific information or behaviors. It’s like trying to forget a single memory; once learned, the knowledge is deeply woven into the neural network’s structure. For businesses, this creates real risks: models can retain private or copyrighted material or exhibit harmful biases. Beyond bias, embedded information can also resurface unpredictably, triggering hallucinations or opening security vulnerabilities. Just as human memories can unconsciously shape behavior, hidden model knowledge can be misused or exploited in harmful ways.
However, at Hirundo we have developed a solution to this problem. Our Machine Unlearning platform quickly and effectively erases specific information from LLMs. We can remove personally identifiable information, confidential knowledge, and toxic behaviors without retraining, making the model more accurate, trustworthy, and compliant.
Kertai: How is Hirundo’s approach different from others?
Luria: Most of the industry tries to improve models by adding new data or building external guardrails. Our approach is different: we focus on redaction rather than addition. Using what we call a “neurosurgical” method, our engine pinpoints where unwanted information or behaviors are encoded in the model’s weights and vectors, then surgically alters these specific values, effectively erasing them from the model’s memory without affecting the rest of its knowledge.
This process reduces hallucinations, biases, and vulnerabilities by more than 50 percent, with short processing times. Unlike earlier unlearning research, which often caused collateral damage or didn’t scale, our work delivers a repeatable, production-ready solution that preserves the model’s overall utility.
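Hirundo’s actual engine is proprietary, but the general idea of targeted weight editing can be illustrated with a toy sketch. The example below assumes a linear associative memory (a stand-in for how facts are thought to be stored in transformer weight matrices) and uses an interleaved projection-style update to erase one stored association while preserving the others; the model, keys, and update rule are all illustrative assumptions, not Hirundo’s method.

```python
import numpy as np

# Toy "model": a linear map W whose weights store key -> value associations,
# a simplified stand-in for facts encoded in a network's weight matrices.
rng = np.random.default_rng(0)
d = 16
keys = rng.standard_normal((4, d))   # four stored "facts"
vals = rng.standard_normal((4, d))

# Fit W so that W @ key ~= value for every stored pair.
W = np.linalg.lstsq(keys, vals, rcond=None)[0].T

def recall_error(W, k, v):
    return float(np.linalg.norm(W @ k - v))

forget_k, forget_v = keys[0], vals[0]    # the association to erase
retain = list(zip(keys[1:], vals[1:]))   # associations to preserve

# "Unlearn" one fact: drive the response to its key toward a null target,
# interleaved with corrective steps that keep the retained facts intact.
lr = 0.02
for _ in range(300):
    W -= lr * np.outer(W @ forget_k, forget_k)   # erase step
    for k, v in retain:
        W -= lr * np.outer(W @ k - v, k)         # preserve step

print(recall_error(W, forget_k, forget_v))            # large: memory removed
print(max(recall_error(W, k, v) for k, v in retain))  # small: utility preserved
```

The point of the sketch is the trade-off the interview describes: the edit targets only the weight directions that carry the unwanted association, so the erased fact becomes unrecoverable while recall of the remaining facts stays essentially unchanged.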
Kertai: Who gets the most value from using Hirundo?
Luria: We bring the most value to teams working on mission-critical, high-risk, or regulated AI systems. Enterprise LLM and data science teams use us to reduce risks like hallucinations and privacy violations, while Responsible AI and AI Safety teams rely on us to minimize organizational risk. We also work with frontier AI labs that spend significant time on post-training fixes; our platform shortens their iteration cycles and improves outcomes.
Kertai: What measurable improvements does machine unlearning deliver?
Luria: The key difference is that Hirundo changes the model itself, not just its outputs. Guardrails and filters act like external firewalls that can be bypassed, but our Machine Unlearning platform rewires the internal representations that cause hallucinations, biases, or jailbreak vulnerabilities.
This has delivered enterprise-grade results: up to 85 percent fewer jailbreak vulnerabilities, more than 55 percent fewer hallucinations and biased responses, and stronger overall model stability. By repairing the faulty “mental wiring” inside the model, we make AI systems more reliable, compliant, and aligned with business goals, without retraining.
Kertai: How should business leaders think about AI unlearning?
Luria: Think of machine unlearning as the Men in Black neuralyzer, erasing only the memories and reflexes you don’t want. In practice, it transforms AI development from a blunt process of trial-and-error fixes into a more controlled adjustment, making models safer, more reliable, and easier to align with evolving requirements. It also allows teams to respond quickly when new risks emerge. Over time, this creates AI systems that remain adaptable and trustworthy as business, regulatory, and security needs evolve. Ultimately, it lays the foundation for AI that can keep pace with society’s expectations, scaling responsibly without sacrificing safety or control.