5 Q’s for Lyric Jain, Founder and CEO of Logically

by Benjamin Mueller

The Center for Data Innovation spoke with Lyric Jain, the founder of Logically, a startup based in Yorkshire that uses AI to detect misinformation and provide fact-checking services to counter fake news. Jain discussed Logically’s approach to fighting fake news, the types of AI models the company deploys, and his strategy to stem the growing tide of misinformation by building a collaborative ecosystem of partners.

Ben Mueller: What makes Logically different in how you approach fake news?

Lyric Jain: The whole thesis comes down to trying to act as close to real-time as possible, either taking proactive measures or responding pre-virality so that we can prevent any harm from happening in the first place. In the last few years, many organizations have published fact-checks or reported people only after a certain level of virality has been reached. By that point, the damage is already done, and clean-up is difficult, if not impossible.

So I think a lot of our uniqueness, both as a value proposition and in technical terms, comes from a history that is slightly unusual for a tech startup. We spent two and a half years just doing cool tech stuff. We were an enlarged R&D team working on technical challenges around the question of how we model social discourse. How do we get computers to understand what news looks like, and what social commentary associated with that news looks like? As a result, we don’t just have one set of models that suggest whether something’s misinformation or not. We have a whole suite: our credibility systems working with our veracity assessment systems, and all of those working with our threat and influence assessment systems. All of this is expert-guided because we don’t want one of our AI systems to run loose and do harm. So all of it is still moderated and monitored by experts in our open-source intelligence team, our fact-checking teams, or even our clients in some cases.

Mueller: What AI methods and technologies do you deploy at Logically?

Jain: We use a lot of the newer developments in AI, like large transformer models to help model language, and we customize them to ensure that they work on short-form text as well as long-form text.
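To make that concrete, here is a minimal sketch of how a pretrained transformer can score a short-form post; the model choice, candidate labels, and example post are illustrative assumptions, not Logically's production setup:

```python
# Illustrative sketch only: scoring a short-form post with a pretrained
# transformer via Hugging Face's zero-shot classification pipeline.
from transformers import pipeline

# Zero-shot classification lets one model handle both short posts and
# long articles without task-specific fine-tuning.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "BREAKING: scientists admit the moon landing was staged!"
result = classifier(post, candidate_labels=["credible", "misleading"])

# Labels come back sorted by score; the top label is a weak signal for
# routing content to human review, not a verdict.
print(result["labels"][0], round(result["scores"][0], 3))
```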

In different areas, we find that different approaches to AI work better. In some cases, for example, rules work even better than a really solid machine learning model. And in some cases machine learning, deep learning in our case, works best.

The fundamental modelling aspect varies depending on what specific technical problem we’re trying to solve. We need this to work on hundreds of millions of pieces of content, so we distill a lot of techniques. We use natural language processing if we just want to structure unstructured data. Our Credibility Suite is a huge ensemble of deep learning models as well as other machine learning models. Knowledge graphs are central to a lot of our reasoning systems. 
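As a rough illustration of that ensemble idea, here is a toy sketch (invented for illustration, not the actual Credibility Suite) of how rule-based and learned signals might be blended, with all weights and thresholds made up:

```python
# Toy credibility ensemble: average hypothetical model scores, then let
# a hand-written rule cap content that trips obvious heuristics.
def rule_score(text: str) -> float:
    """Crude heuristic: clickbait markers lower the credibility ceiling."""
    markers = ("BREAKING", "SHOCKING", "!!!")
    hits = sum(m in text.upper() for m in markers)
    return max(0.0, 1.0 - 0.3 * hits)

def ensemble_credibility(text: str, model_scores: list[float]) -> float:
    """Blend learned models, capped by the rule-based ceiling."""
    learned = sum(model_scores) / len(model_scores)
    return min(learned, rule_score(text))

# Two hypothetical model outputs in [0, 1]; higher means more credible.
print(ensemble_credibility("BREAKING!!! Miracle cure found", [0.8, 0.7]))
```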

Speaking broadly, our focus is on computational linguistics and a lot of the technologies that make that up. Outside of that, we use other kinds of network analyses, particularly to model metadata.
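A hedged sketch of what such metadata-driven network analysis can look like, using NetworkX over invented share data with an invented threshold:

```python
# Sketch: build a share graph from (sharer, original_poster) metadata
# and surface unusually amplified accounts. Data is made up.
import networkx as nx

shares = [("acct1", "sourceA"), ("acct2", "sourceA"),
          ("acct3", "sourceA"), ("acct4", "sourceB")]
G = nx.DiGraph(shares)

# In-degree centrality highlights accounts whose content is amplified
# far more than average: a crude coordination signal for analysts.
centrality = nx.in_degree_centrality(G)
flagged = [node for node, c in centrality.items() if c > 0.5]
print(flagged)  # ['sourceA']
```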

Mueller: Disinformation, election interference, fake news: this has been on the radar for almost half a decade now. How do companies like Logically deal with this big political challenge democracies face?

Jain: It has been interesting to see how the space has evolved over the four or five years we have been involved. The role that we see for ourselves is as an ecosystem builder. We have products that are in use by everyone from governments, to platforms, to third-party fact-checkers, and also individuals.

Looking at the work that other organizations have done, the growth in the world of fact-checking has been great to watch. It’s a testament to the great dedication of those teams. Without fact-checking, we would probably be in a greater mess in terms of where the world is today. But generally, these teams are limited in scale because they’re not really technology organizations. We’d like to help such organizations have more impact, whether by doing more fact-checking, doing it more swiftly, or reaching audiences that are typically hard to reach.

We find ways to mitigate information threats by mapping them to a platform policy violation or connecting them to an individual actor or a nation-state organization. So that’s the value we bring to that kind of top-down approach. The company’s larger goal is to create an ecosystem to build the capacity to fight back against misinformation and disinformation.

We used that ecosystem approach in one of the first markets we entered: India. Today we have all kinds of stakeholders using our platforms in that country, which has many advantages. For instance, we have users of our app who share disinformation directly from WhatsApp with us, which no one has seen before, since not even WhatsApp has access to message content. This collaborative ecosystem allows us to understand what’s going on in specific communities. We’re then able to share that intelligence with platforms and with governments. 

We have worked with governments in several settings, from election integrity to public health to national security, where our intelligence platform has been used to identify threats, be it on a content level, an account level, or based on specific interactions. And we work with platforms to directly help them identify and deal with specific threats. For instance, we’re supporting TikTok in identifying misinformation that might harm users in the UK.

Mueller: How does this model scale?

Jain: The intelligence we share from the platform is fully automated, but we don’t want the response to this intelligence to be automated because that’d be fairly irresponsible. We want the response to be validated. We want it to be connected to a proportionate and effective countermeasure. Not every bit of misinformation, not every bit of hateful content needs to be taken down from the Internet. That’s really where we see the value of human expertise: we have expert analysts in our own team, in our partners across this broader ecosystem that we’re developing, and with our clients, if they have that expertise.

What gives us confidence in our scaling is the partners that are developing alongside us. So we want to ensure that we’re building this ecosystem in the spirit of partnership. As more people want to use Logically’s products and services, we bring in other people who have the local expertise and cultural nuance to help decision-makers understand what looks like a proportionate and appropriate response to any ongoing threat.

Mueller: What sort of opportunities and obstacles do you see ahead for Logically?

Jain: One of the trends that we see that’s particularly worrying is that organizations that historically were nation-state actors are moving into the private sector to conduct influence operations on IPO events or particular brands, effectively going to war on social media. There’s a pretty low barrier to entry, and it is probably more profitable to be one of those organizations than to be Logically at this particular moment in time. 

The related opportunity is that this broadens the universe of people and organizations we can work with. Another big opportunity that we see is a lot of sensible conversations happening around the world on platform regulation, as well as regulation around harmful content. It’s quite rare to find an industry where the companies operating in it are expressly calling for regulation, which Facebook and Google are. A lot of governments and non-governmental organizations are seriously thinking about what frameworks would be responsible and effective.

And we see that as a huge opportunity because we are already trusted partners of many governments and platforms. We feel that there’s a great role for us in the capacity-building exercises that a lot of democratic countries are going to need to go through, such as defining what standards look like and creating greater transparency around how platforms organize themselves internally to deal with these policing challenges. We need to give governments some form of say when it comes to threats to democracy or national security.

Logically can provide oversight on how well platforms are functioning, and escalate any threats that need a governmental eye on them. Governments should play a role, not as arbiters of what’s true or not, but by setting guidelines around what types of threats need to be dealt with in what way and have common definitions and common standards around them. Independent organizations like ours can then come in and effectively act as an S&P or Moody’s for online information and activity and assess risks associated with it.

On the obstacles, there are limitations around what’s possible today using the current state of technology. We excel when it comes to textual content but are fairly limited when it comes to image media. While we have some basic capabilities around deepfake detection, those types of adversarial technologies are developing really quickly. Millions are being poured into better and better deepfake production technology, and that’s not being matched by the so-called good actors in our world.

And one big regulatory challenge that sometimes doesn’t get mentioned enough in this space is that not every platform is Facebook. Facebook invests billions into building systems to deal with malicious content, but those resources aren’t available to all platforms and content providers. So regulators need to look at this on a cross-platform level and not institute ever-greater barriers to entry for social media platforms, which in fact just helps Facebook’s business.

 

This interview has been edited and condensed for clarity.
