The Center for Data Innovation recently spoke with Ron Kerbs, CEO of Kidas, a company that uses AI to flag risky or harmful messages exchanged during online gameplay, helping protect young gamers from potential threats. Kerbs discussed how the company keeps pace with evolving language patterns and the role of human oversight in reviewing flagged content.
David Kertai: What online safety problem is Kidas trying to solve?
Ron Kerbs: Online gaming is a major part of kids’ lives today, but the chat features built into these games, especially voice and text, are often left unmonitored. With our platform ProtectMe, we’re changing that. Too often, parents, educators, and even gaming platforms underestimate the risks children face online: cyberbullying, hate speech, grooming, scams, and exposure to explicit content. We built ProtectMe to fill this gap. It runs in the background and flags dangerous interactions in real time, allowing parents, schools, and esports coaches to take action without interrupting gameplay.
Kertai: How does your AI system help protect kids while they play games online?
Kerbs: ProtectMe does far more than scan for inappropriate words. Our multi-layered AI monitors both live voice and text chats during gameplay, analyzing not just what’s said but how it’s said, factoring in tone, behavioral patterns, and context. It evaluates in-game situations, speech cadence, and emotional cues to assess risk more accurately, allowing us to detect harm in real time.
For example, if a child mentions their school or age to someone significantly older, our system flags it as a potential grooming or privacy concern. This is not just surface-level monitoring; it’s designed to recognize subtle, high-risk interactions. We’ve also developed a domain-specific language called Sappa to help our AI system identify nuanced behavior, including bullying, predatory actions, and emotional distress. The result is a precise, scalable tool that gives parents, educators, and coaches an opportunity to intervene before harm occurs.
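To make the kind of contextual rule Kerbs describes concrete, here is a minimal sketch of a check that flags personal-information sharing across a large age gap. It is an illustration only: ProtectMe’s actual models and the Sappa language are not public, and every name, pattern, and threshold below is a hypothetical stand-in.

```python
# Illustrative sketch only: a highly simplified version of the contextual
# rule described above. All names, patterns, and thresholds are assumptions,
# not Kidas's implementation.
import re
from dataclasses import dataclass

PERSONAL_INFO_PATTERNS = {
    "school_mention": re.compile(r"\b(my school|i go to)\b", re.IGNORECASE),
    "age_mention": re.compile(r"\b(i'?m|i am)\s+\d{1,2}\b", re.IGNORECASE),
}

@dataclass
class ChatMessage:
    sender_age: int      # assumed known from the child's account profile
    recipient_age: int   # assumed known or estimated for the chat partner
    text: str

def flag_privacy_risk(msg: ChatMessage, age_gap_threshold: int = 8) -> list[str]:
    """Return reasons this message should be escalated, if any."""
    reasons = [label for label, pattern in PERSONAL_INFO_PATTERNS.items()
               if pattern.search(msg.text)]
    # Personal details shared with a much older chat partner raise the risk level.
    if reasons and (msg.recipient_age - msg.sender_age) >= age_gap_threshold:
        reasons.append("large_age_gap")
    return reasons

# Example: a 12-year-old telling a 30-year-old where they go to school.
msg = ChatMessage(12, 30, "I'm 12 and my school is right by the arena")
print(flag_privacy_risk(msg))  # ['school_mention', 'age_mention', 'large_age_gap']
```

A production system would rely on learned models and conversation history rather than regexes, but the principle is the same: the risk comes from the combination of what is said and who it is said to.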
Kertai: How does ProtectMe recognize quickly evolving language?
Kerbs: Kids constantly invent new ways to communicate, using slang, emojis, memes, and coded language that can make harmful behavior harder to spot. To stay ahead, we train our models continuously, pulling from real-world examples and community input to recognize how online communication evolves.
But we don’t just focus on detection; we also work to understand how that language is used. Our AI system is guided by a proprietary threat taxonomy, developed with experts in child psychology, cybersecurity, and online behavior. This taxonomy helps us categorize threats based on intention and impact, not just words. That way, when new phrases or behaviors appear, our system adapts intelligently, ensuring we don’t miss dangerous signals hidden in evolving digital lingo.
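The sketch below illustrates what a taxonomy keyed on intention and impact might look like in code. Kidas’s taxonomy is proprietary, so the categories, fields, and severity scores here are assumptions meant only to show why new slang does not require new alerting logic: a fresh phrase just needs to be mapped to an existing category.

```python
# Illustrative sketch only: categories, fields, and severities are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Intention(Enum):
    HARASS = "harass"
    EXPLOIT = "exploit"
    DECEIVE = "deceive"

class Impact(Enum):
    EMOTIONAL = "emotional"
    PHYSICAL_SAFETY = "physical_safety"
    FINANCIAL = "financial"

@dataclass(frozen=True)
class ThreatCategory:
    name: str
    intention: Intention
    impact: Impact
    severity: int  # 1 (low) .. 5 (critical)

TAXONOMY = {
    "cyberbullying": ThreatCategory("cyberbullying", Intention.HARASS, Impact.EMOTIONAL, 3),
    "grooming": ThreatCategory("grooming", Intention.EXPLOIT, Impact.PHYSICAL_SAFETY, 5),
    "scam": ThreatCategory("scam", Intention.DECEIVE, Impact.FINANCIAL, 4),
}

def categorize(detected_label: str) -> Optional[ThreatCategory]:
    """Map a model's detection label onto the taxonomy.

    When models learn a new coded phrase, it is mapped to an existing label;
    the downstream logic keyed on intention and impact stays unchanged.
    """
    return TAXONOMY.get(detected_label)

print(categorize("grooming"))
```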
Kertai: What kind of information do you collect, and how do you protect user privacy?
Kerbs: We analyze in-game voice and text to detect potential risks, but we treat privacy as a non-negotiable priority. We never store or share raw recordings or full chat transcripts. Instead, our system processes data in real time, extracting only the essential insights needed to flag problematic content, and does not retain any personally identifiable information from the communications themselves. To protect user data, we use strict security protocols, including end-to-end encryption. Depending on the use case, processing happens either locally on the device or in secure cloud environments.
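As a rough illustration of the data-minimization pattern Kerbs describes, the sketch below keeps the raw transcript in memory, emits only derived alert fields, and discards the text afterward. The function names, threshold, and stub classifier are hypothetical, not Kidas’s pipeline.

```python
# Illustrative sketch only: shows the general pattern of processing chat in
# memory and retaining only minimal, non-identifying alert fields.
import hashlib
from dataclasses import dataclass

@dataclass
class Alert:
    session_token: str   # opaque hash, not a username or transcript
    risk_type: str
    confidence: float

def process_chat(session_id: str, transcript: str, classify) -> list[Alert]:
    """Run detection in memory and return only derived insights.

    `classify` stands in for the detection model and returns
    (risk_type, confidence) pairs. The raw transcript is never written
    anywhere and goes out of scope when this function returns.
    """
    token = hashlib.sha256(session_id.encode()).hexdigest()[:16]
    return [
        Alert(session_token=token, risk_type=risk, confidence=score)
        for risk, score in classify(transcript)
        if score >= 0.8  # assumed alerting threshold
    ]

# Example with a stub classifier:
stub = lambda text: [("cyberbullying", 0.91)] if "idiot" in text.lower() else []
print(process_chat("match-123", "You're such an idiot, quit the game", stub))
```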
We’re fully compliant with global privacy laws, including the Children’s Online Privacy Protection Act, and we’re transparent with families and partners about what we collect, why we collect it, and how we use it. For us, privacy and safety are inseparable.
Kertai: Does the system rely entirely on AI, or are people involved too?
Kerbs: While our system handles the heavy lifting of monitoring, detecting, and classifying threats, human oversight is still critical. Our team of analysts reviews edge cases, fine-tunes risk models, and ensures the alerts we send are both accurate and actionable. Human reviewers bring context and cultural awareness that AI alone can miss. We see our AI system as a force multiplier: fast, scalable, and consistent. But combining it with human judgment is what truly makes the system effective.
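One common way to combine automated detection with human review, broadly matching the division of labor Kerbs describes, is confidence-based routing: confident detections trigger alerts directly, ambiguous ones go to a review queue. The thresholds and workflow below are assumptions for illustration, not Kidas’s documented process.

```python
# Illustrative sketch only: a simple human-in-the-loop router with assumed thresholds.
from dataclasses import dataclass, field

@dataclass
class Detection:
    risk_type: str
    confidence: float

@dataclass
class Router:
    auto_threshold: float = 0.95      # assumed: alert parents/coaches directly
    review_threshold: float = 0.60    # assumed: below this, treat as noise
    review_queue: list = field(default_factory=list)
    sent_alerts: list = field(default_factory=list)

    def route(self, detection: Detection) -> str:
        if detection.confidence >= self.auto_threshold:
            self.sent_alerts.append(detection)
            return "alert_sent"
        if detection.confidence >= self.review_threshold:
            self.review_queue.append(detection)   # a human analyst decides
            return "queued_for_review"
        return "dropped"

router = Router()
print(router.route(Detection("grooming", 0.97)))     # alert_sent
print(router.route(Detection("hate_speech", 0.72)))  # queued_for_review
```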