
5 Q’s for Jennifer Woodard, Co-founder of Insikt Intelligence

by Hodan Omaar

The Center for Data Innovation spoke with Jennifer Woodard, co-founder of Insikt Intelligence, an organization based in Barcelona, Spain, that uses AI to develop investigative tools that security agencies can use to combat online harms such as terrorism, hate, and human trafficking. Woodard spoke about how the metaverse complicates efforts to counter online crime and the impact the EU’s AI Act may have on innovation in this space.

Hodan Omaar: How does Insikt Intelligence use AI and other emerging technologies to counter cyber, physical, and political terrorism?

Jennifer Woodard: Our technology uses a combination of social network analysis, natural language processing, and our own custom machine learning methodologies to power the detection of problematic content and threats online. This combination is quite distinctive. Many other detection services rely on keyword searches to find harmful content and then analyze the content around it. We’ve developed something much more sophisticated than that: we can detect how and where harmful content is being propagated and identify relationships between the spreaders of this content, such as criminal actors. And perhaps most importantly, our tools are really accurate!
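To make the contrast with keyword-only approaches concrete, here is a minimal sketch, not Insikt’s actual pipeline, of pairing content detection with social network analysis: flag posts, build a graph of who reshares them, and use graph centrality to surface key spreaders. The sample posts, account names, and the `is_harmful` placeholder classifier are all invented for illustration.

```python
# Illustrative sketch: combine a content classifier with social network
# analysis to find accounts central to spreading flagged content.
import networkx as nx

# Hypothetical posts: author, accounts that reshared, and text.
posts = [
    {"author": "acct_a", "shared_by": ["acct_b", "acct_c"], "text": "join our cause today"},
    {"author": "acct_b", "shared_by": ["acct_d"], "text": "weekend football scores"},
    {"author": "acct_c", "shared_by": ["acct_d", "acct_e"], "text": "join our cause today"},
]

def is_harmful(text: str) -> bool:
    # Naive keyword stand-in for a trained NLP model; a real system would
    # score semantics and context, not match a fixed phrase.
    return "join our cause" in text

# Build a directed spread graph: one edge from poster to resharer for each
# reshare of content flagged as harmful.
graph = nx.DiGraph()
for post in posts:
    if is_harmful(post["text"]):
        for sharer in post["shared_by"]:
            graph.add_edge(post["author"], sharer)

# PageRank over the spread graph scores how embedded each account is in the
# reshare network; with edges pointing poster -> resharer, high scores mark
# accounts that repeatedly amplify flagged content.
scores = nx.pagerank(graph)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```

The graph step is what a keyword match alone cannot provide: the relationships among spreaders emerge from the reshare structure, not from the text of any single post.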

One of the reasons we have highly accurate models is that we spend a lot of time researching novel AI techniques and folding that scientific effort and know-how into our commercial solutions. What’s more, we build custom machine learning models for specific domains. For example, the models we build to detect COVID-19 conspiracy campaigns are different from the ones we build to detect online pedophile rings or instances of human trafficking. That might seem obvious, but when I engage with stakeholders and law enforcement agencies, I sometimes find they are using generic social media analysis tools, designed for marketing tasks like influencer identification or sentiment analysis, to detect very complex things such as terrorist activity. Those tools are simply not built for the purpose.
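As a rough sketch of the per-domain idea, one could fit a separate classifier for each threat domain rather than reusing a single generic model. The tiny corpora and labels below are invented for illustration and bear no relation to any real training data.

```python
# Illustrative sketch: one fitted model per threat domain, each learning
# vocabulary and decision boundaries specific to its own problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical per-domain labeled examples (1 = harmful, 0 = benign).
domain_corpora = {
    "covid_conspiracy": (["the vaccine is a tracking chip", "masks reduce spread"], [1, 0]),
    "trafficking":      (["young workers available, no questions asked", "cafe hiring baristas"], [1, 0]),
}

models = {
    domain: make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
    for domain, (texts, labels) in domain_corpora.items()
}

# Each domain's model is queried only for its own domain.
print(models["covid_conspiracy"].predict(["there is a chip in the vaccine"]))
```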

Omaar: As the methods terrorists use to recruit and spread propaganda evolve and extremist content broadens, AI innovation for robust detection models will be crucial. How do you think the EU’s AI Act might impact the bloc’s ability to defend itself?

Woodard: First, let me say that the intention behind the AI Act is laudable. We certainly want to build AI that preserves the privacy of individuals and benefits people, and as a company, ethical AI development is at the heart of everything we do. However, some of the requirements being discussed for things like transparency and explainability must be reconciled with innovation in practice, so as not to slow down advancements in this area.

Consider, for example, the neural networks with hundreds of millions of parameters that power our machine learning models. As humans, we cannot fully explain the way these neural networks make every single connection or decision, but we know they are incredibly accurate. We can explain them only partially, in terms of a set of inputs and the models’ outputs.

If we want to go back to using things like decision trees, where we can explain every single decision, we’ll have transparency but we won’t have the same level of accuracy. There is a tradeoff, and policymakers must weigh it as they work on regulations, especially as it relates to crucial security-related issues. There is also the issue of access to data for building complex AI models. On the one hand, the EU is calling for proposals and research projects for solutions that will detect radicalization at the earliest stages. On the other hand, many of us in the AI research community have found that when we work on such projects, we are confronted with privacy and ethical requirements that preclude us from accessing many of the types of data we need to build these solutions accurately and effectively.
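The tradeoff Woodard describes can be seen in a few lines of scikit-learn on toy data. This is illustrative only, and real accuracy gaps depend on the task, but a shallow decision tree’s full logic prints as readable if/else rules, while a neural network offers no comparable rule-by-rule trace.

```python
# Illustrative sketch of the transparency/accuracy tradeoff on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(128, 128),
                    max_iter=500, random_state=0).fit(X_tr, y_tr)

print("tree accuracy:", tree.score(X_te, y_te))
print("net accuracy: ", net.score(X_te, y_te))

# Every decision the tree makes is inspectable as explicit rules:
print(export_text(tree))
# For the network, we can only relate inputs to outputs; there is no
# equivalent human-readable trace of its internal decisions.
```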

Finally, there is the issue of the impact of regulations on startups and small and medium-sized enterprises (SMEs). Everything we are talking about around detecting threats and harmful content touches on law enforcement and public safety, areas in which many AI systems are classified as high-risk under the law. Smaller companies developing these solutions would therefore bear the costs the regulation imposes on those developing or deploying such systems. Because the potential fines for non-compliance are so high, there is a risk that some of these innovations may never make it to market. That is the other tradeoff policymakers must consider if they want European AI innovators to remain competitive.

Omaar: In what ways might the metaverse complicate efforts to counter terrorism and violent extremism?

Woodard: Platforms in the metaverse will make the job of content moderation more complex from a technical point of view. We’re already seeing this with gaming platforms like Roblox, and I assume it will appear within Meta’s new augmented reality and virtual reality platforms. The rules we have been developing for traditional online spaces don’t really exist yet for such platforms; it’s the wild west. Traditional social media still suffers from content moderation problems, both in detecting and in removing harmful text and multimedia content, and these problems will only become harder to solve in the metaverse. Further, there are new questions about how content propagates and spreads in these virtual spaces that we’ll have to figure out.

Omaar: Insikt recently joined the UN’s group developing international standards for security-related ICTs. Why is standardization important in this context?

Woodard: Yes, we are thrilled to have joined the ITU and to take part in important discussions around standardization work, which is crucial for reflecting the needs of organizations in our sector and for protecting the users of this type of technology. AI is an area that could really benefit from international guidelines and standardization, particularly if we consider some of the things we were talking about, like transparency, explainability, or even bias. Clear standards for how to build systems that safeguard against potentially harmful uses of AI and make sense from a technical perspective don’t yet exist in the EU context, and I believe such standards would go a long way toward supporting some of the goals behind the AI Act.

What is important is that these efforts are developed hand-in-hand with industry and innovators. While well-intentioned, governments don’t have full purview of all the different ways AI technologies are built, how they are deployed, and the data needed to ensure they work effectively. Working with industry to develop standards for companies to use can support innovation and the responsible use of AI, as well as competitiveness more broadly.

Omaar: Just as AI is used to detect threats of terrorism, it can also be used by terrorists themselves. In what ways have you seen AI used maliciously, and how might this develop in the future?

Woodard: One of the most important areas where AI is being used maliciously is information warfare. Disinformation itself isn’t new, but the scale and complexity of recent campaigns have increased as they are deployed at an industrial scale: disinformation as a service. In some of our earlier research, we found that there is a private-sector market for disinformation, meaning threat actors in underground criminal forums offer to spread disinformation for those willing to pay. AI supports these efforts not only by creating new forms of disinformation, but also by helping propagate existing forms in new ways that don’t trigger current content moderation controls, and by augmenting the human ability to create this type of content at scale.

What’s coming next with the criminal use of disinformation is much worse than what we are currently seeing. Given the geopolitical situation we find ourselves in, the main thrust of our research at the moment is detecting disinformation, and we plan to create new commercial solutions based on this research in the near future.
