The Center for Data Innovation recently spoke with Karen Nguyen, CEO of OFFENSAI, a Delaware-based company specializing in autonomous red-teaming that tests security vulnerabilities in companies’ cloud infrastructure. Nguyen explained how OFFENSAI uses AI-driven models to simulate cyberattacks, generating data about how attackers behave, which defenses fail, and what security teams need to fix before a breach occurs.
David Kertai: What problem is OFFENSAI solving?
Karen Nguyen: Most cloud environments use multiple security tools to block threats and alert teams to suspicious activity. The problem is that these alerts are often inaccurate or misleading, forcing security teams to spend hours investigating routine events, like software updates or employees logging in from new locations, that turn out to be harmless. Meanwhile, real threats can blend in with normal activities and evade detection.
Traditional security testing, such as a periodic penetration test, is slow and costly, so most organizations run it only once or twice a year. That leaves long stretches where new vulnerabilities can emerge. OFFENSAI addresses that gap with a platform built around two AI-driven red-teaming models that continuously analyze cloud environments and produce real-world evidence of exploitable risk, exposing critical vulnerabilities and showing the actions needed to fix them. The Automated Attack Path Discovery engine uses an AI model to identify and safely test the exact steps a real attacker could use to infiltrate a cloud environment, while the Evasion Engine uses an adaptive AI model to perform the attacks in different ways, changing timing, traffic patterns, and techniques, to test whether defenses detect them.
Kertai: How does your Automated Attack Path Discovery engine work with existing cloud infrastructure?
Nguyen: We deploy the models directly inside a customer’s cloud environment, where they operate as a contained adversary. This ensures the models can only act within the same boundaries a real attack would encounter. The Automated Attack Path Discovery engine starts from the perspective of a compromised account, assuming an attacker has already gained an initial foothold, and then explores the environment using only the identities, permissions, and configurations that actually exist. This allows the AI model to learn how risk emerges from real configurations rather than theoretical assumptions.
The Automated Attack Path Discovery engine then maps cloud resources, permissions, and relationships, identifies weak points, and determines whether those weaknesses can be chained together—meaning access gained from one flaw enables exploitation of another—into a viable attack path. For example, it might start with an exposed storage bucket—a cloud-based repository for files and sensitive data—then move to an overly permissive identity role, and escalate privileges to reach sensitive data. The model outputs structured data and visualizations that show these steps end to end on our platform, giving security teams a clear picture of how an attacker could move through their environment.
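The chaining Nguyen describes can be thought of as a search over a graph of cloud relationships. The following is a minimal illustrative sketch, not OFFENSAI's actual engine: the node names, edges, and example environment are invented for illustration, and a real system would derive the graph from live identities, permissions, and configurations.

```python
from collections import deque

# Illustrative (hypothetical) graph of exploitable relationships in a cloud
# environment: an edge A -> B means access to A enables access to B.
edges = {
    "compromised-account": ["exposed-bucket"],
    "exposed-bucket": ["leaked-role-credentials"],
    "leaked-role-credentials": ["over-permissive-role"],
    "over-permissive-role": ["production-database"],
}

def find_attack_path(start, target):
    """Breadth-first search for a chain of weaknesses linking start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == target:
                return path + [nxt]  # shortest viable attack path found
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no viable path in this environment

path = find_attack_path("compromised-account", "production-database")
```

In this toy environment the search recovers the end-to-end chain from the example above: an exposed storage bucket leading through an over-permissive role to sensitive data.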
Kertai: How does your Evasion Engine use the attack paths your first model discovers to test a company’s defenses?
Nguyen: After the Automated Attack Path Discovery engine identifies the attack paths, the Evasion Engine takes those steps and tests how they would appear during normal day-to-day cloud activity. Using AI to adapt execution in real time, it blends its actions into routine operations and tracks how those behaviors show up in monitoring systems. This lets teams see data-backed evidence of how their security tools respond and whether they can detect subtle, evasive behavior rather than only obvious attacks.
The Evasion Engine highlights which steps trigger alerts, which go unnoticed, and how far an intruder could move inside the environment, giving teams a realistic, measurable view of how their defenses perform under real-world conditions.
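One way to picture that evasion testing is to replay the same attack step with different timing and check whether a detector still fires. The sketch below assumes a simple rate-based detector and two invented execution variants; it is an illustration of the idea, not OFFENSAI's Evasion Engine.

```python
import random

def rate_detector(timestamps, window=10.0, threshold=5):
    """Toy monitoring rule: alert if more than `threshold` actions
    occur within any `window`-second interval."""
    timestamps = sorted(timestamps)
    for start in timestamps:
        in_window = [t for t in timestamps if start <= t < start + window]
        if len(in_window) > threshold:
            return True
    return False

def burst_variant(n=10):
    """Noisy attacker: fire all actions almost at once."""
    return [i * 0.1 for i in range(n)]

def low_and_slow_variant(n=10, seed=0):
    """Evasive attacker: spread the same actions over minutes,
    blending into routine activity."""
    rng = random.Random(seed)
    t, out = 0.0, []
    for _ in range(n):
        t += rng.uniform(30.0, 90.0)
        out.append(t)
    return out

detected_burst = rate_detector(burst_variant())        # obvious attack
detected_slow = rate_detector(low_and_slow_variant())  # evasive replay of the same steps
```

The burst variant trips the rule while the low-and-slow variant performs the identical steps undetected, which is exactly the kind of gap this testing is meant to surface.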
Kertai: What insights does the user gain from your red-teaming process?
Nguyen: Users gain an attacker-level perspective of how their cloud environment could be compromised and the real impact of those breaches. They see how small misconfigurations, such as excessive permissions or unsecured resources, can combine into serious risks, how far an intruder could move laterally, and which systems or data an attacker could ultimately reach.
Through our platform, the two models also provide decision-relevant insights rather than just findings, including actionable remediation guidance. For instance, a team might discover that a single misconfigured identity role allows access to multiple production databases and can lock down that role immediately, removing several attack paths at once. OFFENSAI generates compliance-ready reports and tracks measurable changes over time, such as reductions in viable attack paths or improved detection rates, enabling security, compliance, and leadership teams to prioritize fixes based on observed risk rather than theoretical exposure.
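The measurable changes Nguyen mentions could be tracked with a small summary over successive scans. The record fields and numbers below are invented for illustration; they are not OFFENSAI's reporting format.

```python
# Hypothetical scan records: counts of viable attack paths and of attack
# steps that triggered an alert, before and after remediation.
scans = [
    {"date": "2024-01", "viable_paths": 12, "steps_detected": 4, "steps_total": 20},
    {"date": "2024-04", "viable_paths": 5, "steps_detected": 14, "steps_total": 20},
]

def summarize(prev, curr):
    """Compare two scans: paths removed and change in detection rate."""
    return {
        "paths_removed": prev["viable_paths"] - curr["viable_paths"],
        "detection_rate_before": prev["steps_detected"] / prev["steps_total"],
        "detection_rate_after": curr["steps_detected"] / curr["steps_total"],
    }

summary = summarize(scans[0], scans[1])
```

A summary like this gives leadership the observed-risk trend line the interview describes: fewer viable paths and a higher share of attack steps caught by monitoring.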
Kertai: How do you ensure your models stay up-to-date with the latest attack techniques?
Nguyen: Our research team continuously develops proprietary attack scenarios based on real-world incidents, threat intelligence feeds, and emerging cloud exploitation techniques. We feed this data directly into the models so they evolve alongside attacker behavior, ensuring the system reflects current threats rather than static assumptions. This keeps OFFENSAI aligned with how attackers actually operate and ensures organizations are tested against the most relevant vulnerabilities.


