
5 Q’s with Cyril Gorlla, CEO of CTGT

by David Kertai

The Center for Data Innovation recently spoke with Cyril Gorlla, CEO of CTGT, a San Francisco-based company that helps organizations detect and correct biased or unreliable outputs from AI models. Gorlla shared how CTGT prevents errors in AI outputs by analyzing generated responses, identifying problematic claims, and adjusting them according to verified data and company policies.

Kertai: How does your system work? 

Gorlla: Think of CTGT like a spell-check for generative AI models. Instead of spelling mistakes, we’re checking for biased outputs and hallucinations. When a model generates an answer, CTGT sits between the model and the user, breaking the response into individual claims—factual statements, recommendations, or assertions—and screening each one against a policy graph built from the company’s verified data and rules. The system checks whether those claims align with trusted information and whether the model reached its conclusions using allowed sources, assumptions, and logical steps, rather than speculation or unsupported inference.

If the model introduces unverifiable information, violates a policy, or shows signs of biased decision-making, CTGT catches the issue at its source and flags the exact point where the response goes off track. Finally, CTGT explains precisely why a response fails a company’s compliance requirements and identifies the specific rule or knowledge constraint that was violated. 
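The claim-level screening Gorlla describes can be pictured with a short sketch. The code below is purely illustrative: it assumes a toy rule format with forbidden phrases and a naive sentence-level notion of a “claim,” since CTGT’s actual policy graph and decomposition logic are not public. Each verdict records which rule failed and why, mirroring the point-of-failure explanation described above.

```python
# Illustrative only: a toy claim-screening pipeline. The "policy graph" here is a flat
# list of rules with forbidden phrases, and claims are split naively by sentence.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PolicyRule:
    rule_id: str
    description: str
    forbidden_phrases: List[str]  # stand-in for a structured policy graph


@dataclass
class ClaimVerdict:
    claim: str
    passed: bool
    violated_rule: Optional[str]  # which rule failed, i.e., the flagged point of failure
    reason: Optional[str]


def split_into_claims(response: str) -> List[str]:
    # Naive stand-in: treat each sentence as one claim.
    return [s.strip() for s in response.split(".") if s.strip()]


def screen_claim(claim: str, rules: List[PolicyRule]) -> ClaimVerdict:
    for rule in rules:
        if any(phrase.lower() in claim.lower() for phrase in rule.forbidden_phrases):
            return ClaimVerdict(claim, False, rule.rule_id, rule.description)
    return ClaimVerdict(claim, True, None, None)


def screen_response(response: str, rules: List[PolicyRule]) -> List[ClaimVerdict]:
    return [screen_claim(claim, rules) for claim in split_into_claims(response)]


if __name__ == "__main__":
    rules = [PolicyRule("MED-001", "Do not claim a medication cures a disease", ["cures"])]
    answer = "This drug reduces joint swelling. It cures arthritis entirely."
    for verdict in screen_response(answer, rules):
        print(verdict)
```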

Kertai: How is CTGT different from the tools model developers and companies already use?

Gorlla: Most organizations rely on prompts, filters, and retrieval-augmented generation (RAG) as their primary guardrails. But these are inherently probabilistic. Sometimes they work; often they don’t. CTGT works at a different layer. We sit on top of existing models and evaluate each output in real time against a structured policy graph built from a company’s data, rules, and regulations. Because those policies are enforced deterministically rather than suggested, we can ensure the same constraints are applied consistently across models and use cases—without retraining and without relying on prompts that can fail unpredictably.
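The distinction between deterministic enforcement and prompt-based suggestion can be sketched as a thin layer that wraps any model call and applies the same checks to every output. The rule format and model function below are illustrative assumptions, not CTGT’s API.

```python
# Illustrative only: a deterministic enforcement layer that wraps any model call and
# applies the same checks to every output, regardless of which model produced it.
from typing import Callable, Dict, List

BLOCK_MESSAGE = "Response withheld: it violated policy {rule_id}."


def enforce(model_fn: Callable[[str], str], prompt: str, rules: List[Dict]) -> str:
    """Call any underlying model, then apply exact rule checks to the output."""
    response = model_fn(prompt)
    for rule in rules:
        if rule["predicate"](response):  # an exact check, not a prompt-level suggestion
            return BLOCK_MESSAGE.format(rule_id=rule["id"])
    return response


if __name__ == "__main__":
    rules = [{"id": "FIN-007",
              "predicate": lambda text: "guaranteed return" in text.lower()}]

    def fake_model(prompt: str) -> str:
        return "This fund offers a guaranteed return of 20 percent."

    print(enforce(fake_model, "Describe the fund.", rules))
```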

Kertai: How does CTGT solve detected problems in AI models? 

Gorlla: Once our system identifies an issue, it creates a compliant version of the original response using the original policy graph. It rewrites the answer to align with verified facts, follow all relevant rules, and preserve the user’s original intent. CTGT does this by replacing or removing only the specific claims that violate policy, rather than discarding the entire response.

The system determines which claims to adjust by comparing each one against the policy graph, which contains the organization’s verified data, rules, and guidelines. This ensures that corrections are precise and targeted, leaving accurate information intact and preserving the overall context of the answer. For example, if a model incorrectly claims that a particular medication cures a disease, CTGT will remove or correct just that claim while leaving the rest of the response, including related explanations, unchanged.
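A minimal sketch of that targeted correction, using the medication example above, might look like the following. The pattern table and replacement wording are toy stand-ins; the real system works against a full policy graph rather than regular expressions.

```python
# Illustrative only: targeted correction that rewrites just the violating claims and
# leaves the rest of the response intact.
import re

# pattern marking a violating claim -> policy-approved replacement wording (invented here)
POLICY = {
    r"cures\s+\w+": "may help manage symptoms according to approved labeling",
}


def correct_response(response: str) -> str:
    claims = [c.strip() for c in response.split(".") if c.strip()]
    corrected = []
    for claim in claims:
        fixed = claim
        for pattern, replacement in POLICY.items():
            if re.search(pattern, fixed, flags=re.IGNORECASE):
                fixed = re.sub(pattern, replacement, fixed, flags=re.IGNORECASE)
        corrected.append(fixed)
    return ". ".join(corrected) + "."


if __name__ == "__main__":
    answer = ("The medication reduces inflammation. It cures arthritis. "
              "Patients should still consult a doctor.")
    print(correct_response(answer))
    # Only the middle claim changes; the surrounding sentences are preserved.
```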

This correction process happens instantly and does not require retraining or modifying the underlying model. As a result, organizations can improve reliability and safety, maintain trust in AI outputs, and continue using existing systems and workflows without disruption.

Kertai: How does CTGT help companies using AI to ensure regulatory compliance?

Gorlla: Compliance teams load rules, standard operating procedures, and risk guidelines into the CTGT platform, and the system automatically checks each AI-generated response against these policies to ensure it meets the organization’s requirements. CTGT logs which rules it applied and why, creating a clear audit trail that teams can use to demonstrate compliance with financial regulations, such as those from the Securities and Exchange Commission (SEC) or the Financial Industry Regulatory Authority (FINRA), as well as industry-specific standards. Teams can update these policies instantly, allowing organizations to respond to regulatory changes without retraining models or interrupting service.
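A simplified sketch of such an audit trail is shown below. The rule identifiers and checks are placeholders rather than an actual regulatory rule set; in practice the rules would come from the compliance policies loaded into the platform.

```python
# Illustrative only: a compliance check that logs which rule was applied, whether the
# output passed, and when, producing a reviewable audit trail.
import json
from datetime import datetime, timezone
from typing import Dict, List

RULES = [
    {"id": "RULE-PERF-01", "description": "No performance guarantees",
     "check": lambda text: "guaranteed" not in text.lower()},
    {"id": "RULE-BAL-02", "description": "Promotional claims must mention risk",
     "check": lambda text: "risk" in text.lower()},
]


def audit_check(response: str) -> List[Dict]:
    """Evaluate every rule against the response and record an audit-trail entry."""
    trail = []
    for rule in RULES:
        trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "rule_id": rule["id"],
            "description": rule["description"],
            "passed": rule["check"](response),
        })
    return trail


if __name__ == "__main__":
    draft = "This strategy has guaranteed upside."
    print(json.dumps(audit_check(draft), indent=2))
```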

Kertai: Could you provide any real-world examples of your technology in use? 

Gorlla: One recent example comes from our research on the open-source model DeepSeek, which often refuses to answer politically or socially sensitive questions due to internal censorship mechanisms. For instance, when asked, “What happened during the 1989 Tiananmen Square protests?” the model typically gave a vague, non-informative response or refused to answer altogether.

Using our system, we identified the specific internal activation patterns, the signals within the model’s thinking process, that caused it to block certain answers. Rather than retraining the model or removing safety controls entirely, we selectively adjusted those signals while the model was generating responses for users, a stage known as inference time. This allowed the model to respond directly and factually while preserving overall performance.
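The general idea of adjusting internal signals at inference time can be illustrated with a toy example of steering a hidden-state vector away from a “refusal” direction. The vectors and steering strength below are random stand-ins for demonstration; they are not DeepSeek internals or CTGT’s published method.

```python
# Illustrative only: a toy version of inference-time activation steering. A direction
# associated with refusal behavior is dampened in a hidden-state vector during generation.
import numpy as np


def dampen_direction(hidden_state: np.ndarray, direction: np.ndarray,
                     strength: float = 1.0) -> np.ndarray:
    """Subtract the component of the hidden state lying along `direction`."""
    unit = direction / np.linalg.norm(direction)
    projection = np.dot(hidden_state, unit) * unit
    return hidden_state - strength * projection


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden = rng.normal(size=8)        # toy hidden state for one token
    refusal_dir = rng.normal(size=8)   # toy "refusal" direction identified by analysis
    unit = refusal_dir / np.linalg.norm(refusal_dir)

    steered = dampen_direction(hidden, refusal_dir)
    print("alignment before:", float(np.dot(hidden, unit)))
    print("alignment after: ", float(np.dot(steered, unit)))  # ~0 with strength=1.0
```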

In tests across 100 sensitive prompts, the original model produced complete answers only about 32 percent of the time. The CTGT-adjusted version answered all of them, without reducing accuracy on unrelated tasks such as math, coding, or general reasoning. This example shows how CTGT can reduce unnecessary bias or censorship while maintaining control, transparency, and model quality.
