
5 Q’s with Ksenia Iliuk, Co-founder of LetsData

by David Kertai

The Center for Data Innovation recently spoke with Ksenia Iliuk, Co-founder of LetsData, a Ukraine-based company developing machine learning systems to counter misinformation. Iliuk explained that LetsData’s models detect coordinated bot campaigns on social media platforms by scanning millions of posts daily, identifying unusual activity, and providing companies with situation-specific reports and response recommendations.

David Kertai: What inspired the creation of LetsData?

Ksenia Iliuk: My co-founder Andriy Kusy and I created LetsData after experiencing the limitations of tracking information operations firsthand. We found that existing tools, such as keyword-based social media trackers and brand-monitoring systems, reacted too slowly and flagged problems only after a misinformation campaign had already spread. These tools struggle to keep pace with the constantly evolving tactics of coordinated bot networks. We wanted to build a more proactive solution, one that anticipates and detects manipulation early and delivers user-specific recommendations before a campaign can cause real harm, especially as bot campaigns have grown both more frequent and more sophisticated.

In 2021 and 2022, Andriy and I began experimenting with machine learning models to spot early signals of coordinated online activity, such as sudden surges in account creation, synchronized posting behavior, and shifts in messaging across platforms. Those experiments ultimately led to the creation of LetsData.

Kertai: How does LetsData’s machine learning system work to detect misinformation campaign patterns and narrative shifts?

Iliuk: We use multiple machine learning models trained on real-world examples of bot campaigns. These models scan millions of posts daily across multiple platforms and languages, looking for unusual activity, such as the rapid spread of identical hashtags across unrelated accounts, repeated sharing of the same links by newly created profiles, or coordinated commenting patterns that push a specific narrative. For example, if 50 new Facebook accounts appear simultaneously from the same IP address or share similar profile traits, that’s an anomaly; typical users don’t behave that way.
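To make that last example concrete, here is a minimal Python sketch of one such anomaly heuristic: grouping newly created accounts by shared signup IP and flagging unusually large batches. The field names, time window, and threshold are illustrative assumptions, not LetsData’s actual schema or logic.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical illustration of one anomaly signal described above:
# a burst of freshly created accounts sharing a signup IP address.
NEW_ACCOUNT_WINDOW = timedelta(hours=24)  # assumed "newly created" cutoff
CLUSTER_THRESHOLD = 50                    # mirrors the example in the text

def flag_coordinated_signups(accounts, now):
    """accounts: iterable of dicts with 'created_at' and 'signup_ip' keys."""
    clusters = defaultdict(list)
    for account in accounts:
        if now - account["created_at"] <= NEW_ACCOUNT_WINDOW:
            clusters[account["signup_ip"]].append(account)
    # Any IP that produced an unusually large batch of new accounts
    # gets flagged for downstream review.
    return {ip: batch for ip, batch in clusters.items()
            if len(batch) >= CLUSTER_THRESHOLD}
```

A production system would combine many such signals (profile similarity, posting synchrony, link reuse) rather than rely on any single heuristic.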

When our system detects such patterns, it automatically generates alerts and reports for clients, such as governments, banks, and consumer brand companies. Each report includes multi-step recommendations for countering misinformation campaigns, such as flagging false content, issuing public clarifications, and engaging with affected audiences, enabling organizations to respond quickly and effectively. To ensure accuracy, we maintain a human-in-the-loop approach: experts guide model training, validate outputs, and confirm that alerts are both relevant and actionable. This combination of automation and human oversight allows us to detect emerging campaigns early and reliably.
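As a rough sketch of that human-in-the-loop step, the snippet below routes model-generated alerts either straight into a client report or into an analyst review queue, depending on model confidence. The Alert structure, score scale, and threshold are hypothetical, and this only covers the output-validation side; in the workflow Iliuk describes, experts also guide model training.

```python
from dataclasses import dataclass, field

# Hypothetical triage step: only high-confidence alerts bypass
# expert review; everything else waits for human validation.
@dataclass
class Alert:
    pattern: str       # e.g. "identical hashtags across unrelated accounts"
    confidence: float  # assumed model score in [0, 1]
    recommendations: list = field(default_factory=list)

def triage(alerts, auto_threshold=0.95):
    """Split alerts into report-ready and needs-review buckets."""
    report_ready, review_queue = [], []
    for alert in alerts:
        if alert.confidence >= auto_threshold:
            report_ready.append(alert)
        else:
            review_queue.append(alert)  # analysts validate before release
    return report_ready, review_queue
```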

Kertai: What challenges do you face in keeping up with evolving threats?

Iliuk: The biggest challenge today is the growing sophistication of bad actors leveraging AI tools. In the past, bot networks were easier to detect because their messages were repetitive, poorly formatted, or filled with grammatical errors. These campaigns also required significant human input, which limited their scale.

Today, by contrast, AI tools enable bots to maintain consistent identities, engage in realistic conversations, and execute scams with minimal human oversight. This lets bad actors combine scale and quality, deploying large volumes of bots for misinformation or scam campaigns that are far more convincing and harder to detect.

Kertai: What are the greatest opportunities from using your service to detect misinformation?

Iliuk: One key benefit of using LetsData is that it monitors large-scale misinformation campaigns more cost-effectively than separate, manual detection methods.

In the past, spreading misinformation was cheap for bad actors, while defenders faced high costs because they needed separate, specialized detection modules for each platform or scenario. Today, we can train a core machine learning model and rapidly adapt it to different contexts, enabling us to respond to emerging campaigns without sacrificing accuracy. By reusing and fine-tuning a single model across multiple use cases, we’ve shifted the economics of detection in our favor.
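A minimal sketch of that reuse pattern, assuming a PyTorch-style setup: the expensive shared backbone is trained once and frozen, and only a small context-specific head is fine-tuned for each new scenario. The class names and the 768-dimensional embedding size are assumptions for illustration, not LetsData’s architecture.

```python
import torch.nn as nn

class CampaignDetector(nn.Module):
    """One shared backbone, one cheap head per client context."""
    def __init__(self, backbone, num_labels, embed_dim=768):
        super().__init__()
        self.backbone = backbone                      # shared, pre-trained encoder
        self.head = nn.Linear(embed_dim, num_labels)  # context-specific layer

    def forward(self, x):
        return self.head(self.backbone(x))

def adapt_to_context(backbone, num_labels):
    # Freeze the costly shared model; only the small new head needs
    # training, which is what shifts the economics toward defenders.
    for param in backbone.parameters():
        param.requires_grad = False
    return CampaignDetector(backbone, num_labels)
```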

Kertai: Can you share any real-world use cases?

Iliuk: One notable case involved helping a U.S. company counter a large-scale impersonation campaign ahead of an AI product launch. The bot campaign originated in Vietnam, where individuals created AI-generated advertisements that mimicked the company, claimed its product had already launched, and directed users to fake chatbots designed to distribute malware. Our system detected the threat early and provided case-specific steps that allowed the company to take swift, informed action, shutting down the campaign before the official launch and protecting both its users and its reputation.
