
5 Q’s for Guillaume Bouchard, CEO and Co-Founder at Checkstep

by Patrick Grady

The Center for Data Innovation spoke with Guillaume Bouchard, CEO and co-founder of Checkstep, a London-based start-up providing artificial intelligence (AI)-powered products for contextual content moderation to online platforms. Bouchard spoke about the use of AI in content moderation, the importance of explainability, and emergent risks.

Patrick Grady: What motivated you to set up Checkstep?

Guillaume Bouchard: Checkstep started as a side project to act quickly against the negative sentiment online during the first Covid-19 lockdown. It became a real thing when we realized that online harm regulations were planned for 2022. These legal requirements would further increase the need for a well-designed trust and safety platform across all the online platforms that were desperate to add more social features to their products. For the first time, I could work on a business idea that had a real impact on society and that everyone understood.

Grady: How does AI aid scalability in content moderation?

Bouchard: Today, AI is synonymous with automation, and its main benefit is scalability. In the beginning, online platforms with a moderate amount of content can treat content moderation as a customer service problem, where humans check every piece of content created on the platform. This obviously does not scale, and at some point, platforms need to automate. They often start by filtering on keywords because it is simple and fast to implement, but that approach has its own problems. Ultimately, you want a data-driven system that can adapt quickly to changing needs. This is what AI is about.
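To make the contrast Bouchard describes concrete, here is a minimal, hypothetical sketch (not Checkstep's implementation) of a static keyword filter next to a data-driven classifier that can be re-trained as needs change; the blocklist, toy training data, and scikit-learn pipeline are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical static blocklist: cheap to build, brittle against new phrasing.
BLOCKLIST = {"scam", "spamlink"}

def keyword_filter(text: str) -> bool:
    """Flags content only when an exact blocklisted word appears."""
    return any(word in BLOCKLIST for word in text.lower().split())

# A data-driven classifier learns from labelled examples and can be
# re-trained as new kinds of unwanted content appear (toy data below).
train_texts = [
    "win a free prize, click this link now",
    "see you at the meetup tonight",
    "limited offer, send your card details",
    "thanks for the update, looks good",
]
train_labels = [1, 0, 1, 0]  # 1 = violates policy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(keyword_filter("exclusive prize for you"))          # False: no exact keyword match
print(model.predict_proba(["exclusive prize for you"]))   # learned score instead of a hard rule

The keyword rule misses anything outside its fixed vocabulary, while the classifier can be refitted on new labelled examples, which is the kind of adaptability Bouchard points to.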

Grady: Explainable AI is part of the solution that Checkstep offers to clients. Why is it important that AI gives a human-readable explanation for each decision it makes? Are there any drawbacks to using explainable AI?

Bouchard: Explainable AI is necessary if you want to understand what your moderation system is actually doing, and that becomes harder and harder as AI accuracy approaches human performance. There is nothing more frustrating than a system that does not work with no way to know why. Another reason is fairness toward your users: specific categories of people could be treated unfairly if an AI cannot be inspected and its biases are not well understood. Regulations are under way to make explainability and auditing mandatory for applications such as content moderation.
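As a simple illustration of attaching a human-readable explanation to each moderation decision, the sketch below pairs an action with the policy and score that triggered it. The data structure, field names, and threshold are hypothetical, not Checkstep's API.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    content_id: str
    action: str       # e.g. "remove", "keep", "escalate"
    policy: str       # which policy the decision relied on
    explanation: str  # plain-language reason a reviewer, auditor, or user can read

def decide(content_id: str, scores: dict, threshold: float = 0.8) -> ModerationDecision:
    """Picks the highest-scoring policy label and writes out why it was chosen."""
    policy, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return ModerationDecision(
            content_id, "remove", policy,
            f"Classified as '{policy}' with score {score:.2f}, above the {threshold} threshold.",
        )
    return ModerationDecision(
        content_id, "keep", "none",
        f"No policy score exceeded the {threshold} threshold.",
    )

print(decide("post-123", {"harassment": 0.91, "spam": 0.12}))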

Grady: How do you monitor and respond to emerging threats in content moderation?

Bouchard: New types of unwanted behavior must be quickly identified and mitigated. At Checkstep, we focus on fast “adaptation.” It sounds like a magical word, but if deployed correctly, capabilities such as real-time re-training of machine learning models, the addition of new filters, and the fast creation of content-specific queues enable our partners to improve in near real time.
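A toy sketch of this kind of adaptation, under the assumption that filters and review queues can be registered at runtime (all names here are hypothetical, not Checkstep's product):

from collections import defaultdict
from typing import Callable, Dict, List

filters: Dict[str, Callable[[str], bool]] = {}
queues: Dict[str, List[str]] = defaultdict(list)

def register_filter(name: str, predicate: Callable[[str], bool]) -> None:
    """Adds a new filter at runtime, e.g. for an emerging type of abuse."""
    filters[name] = predicate

def route(text: str) -> None:
    """Sends content to a dedicated review queue for every filter it trips."""
    for name, predicate in filters.items():
        if predicate(text):
            queues[name].append(text)

# A new harm appears: register a filter and a queue for it without redeploying.
register_filter("miracle_cure_claims", lambda t: "miracle cure" in t.lower())
route("This miracle cure ends the pandemic overnight!")
print(dict(queues))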

Grady: The digital space is constantly evolving; for instance, with emergent virtual platforms and spaces. What are some examples of future risks?

Bouchard: The metaverse is often referred to as the new digital frontier, and new, unanticipated risks are already appearing. Even existing risks, such as impersonation, will become more pronounced because of the difficulty of differentiating between the virtual and the real. In particular, all these new AI-powered content creation platforms make it so easy to create realistic profiles that the need to protect users becomes even stronger. In the future, we will basically need a way to empower users embedded in the metaverse to quickly differentiate authentic behavior from actions that are meant to deceive them. It’s not easy, and it will probably come from tighter integration of safety features with the platform itself.
