The Center for Data Innovation spoke with Harleen Kaur, the chief executive officer of Ground, a startup in Waterloo, Canada that uses artificial intelligence (AI) in its mobile news app. Kaur spoke about how her team is using AI to combat fake news and burst users’ filter bubbles.
This interview has been lightly edited.
Daniel Castro: The news media have long struggled with accuracy, integrity, and sourcing, but there is renewed focus on “fake news.” How does your company hope to address these problems using AI?
Harleen Kaur: A couple of things have happened over the last 10 years or so. One is the advent of social media. Any so-called news can get picked up and propagated very, very easily, which is great in some ways: anybody sitting anywhere can publish news and spread it. But at the same time, Facebook and Twitter never set out to be news platforms, and they don’t have the checks and balances to stop the propagation of news that shouldn’t be propagating. I think this has been a big issue.
For example, if a normal person sees a tweet or Facebook post that has been retweeted a few thousand times, or that their best friend has liked, they just take that as an indicator of veracity instead of asking whether what they’re seeing is true. So volume of likes, or somebody you know liking something, is taken as an indicator of veracity, which is not helping. Second, especially in the U.S. during and after the elections, it has become very apparent that news organizations lean different ways. Even when the veracity of a story is not in question, the spin that different media outlets put on it is becoming more and more apparent. Now more than ever, people need a balanced diet of the different takes on the news.
So we are addressing these two issues. First, we make sure that the news we put on Ground has been thoroughly vetted. We take news from traditional media, breaking news from social media, since that is where most breaking news happens, and news reported by our own users, and then use AI to make sure this news is true. Only then does it make it to the platform. To address the second problem of biased news, we display the news outlets most relevant to a story side by side across the political spectrum, from global to local, so people can easily click through them, read what’s going on, and be aware of the biases.
Castro: How do you manage the tension between breaking news, where sources and verification are limited, and ensuring accuracy?
Kaur: Yeah, that is a big problem. We do a few things with AI. Our platform constantly monitors the world as a grid for breaking news: whenever there is heightened social media activity in a certain part of the world, we know that something unusual is happening there, possibly news. Then our system starts looking at the common words being used, or the relationships between those words. For example, if the word “fire” keeps repeating in a certain block, our system realizes there’s a fire going on there.
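To make the grid idea concrete, here is a minimal Python sketch of the kind of keyword-spike detection Kaur describes. The function names, thresholds, and baseline model are illustrative assumptions, not Ground’s actual system.

```python
from collections import Counter

# Illustrative sketch of grid-cell keyword-spike detection, not Ground's
# actual system. A cell is flagged when a word's frequency in the current
# window far exceeds its historical baseline for that cell.

SPIKE_FACTOR = 5.0  # how many times above baseline counts as unusual

def detect_keyword_spikes(recent_posts, baseline_freq):
    """recent_posts: post texts from one grid cell in the current window.
    baseline_freq: word -> average count per window for this cell."""
    counts = Counter(word for post in recent_posts for word in post.lower().split())
    spikes = {}
    for word, count in counts.items():
        baseline = baseline_freq.get(word, 0.5)  # floor for unseen words
        if count / baseline >= SPIKE_FACTOR:
            spikes[word] = count
    return spikes

# Repeated mentions of "fire" in one cell trip the detector:
posts = ["huge fire downtown", "fire trucks everywhere", "can smell the fire"]
print(detect_keyword_spikes(posts, {"fire": 0.2, "downtown": 1.0}))  # {'fire': 3}
```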
And then we check the people who are posting. We can check their location history, confirm that they have posted from there before, and that they are not actually sitting in Siberia or Russia or wherever. If they are posting content, we check the metadata of that content. So if there is a photo or video, we check that it originated that day, at that time. Only after our system is satisfied with the newsworthiness and the veracity score do we import the story onto the platform.
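As an illustration of that timestamp check, the sketch below reads a photo’s EXIF date with the Pillow library and compares it to the reported event time. The tag choice and tolerance are assumptions, and EXIF data can be missing or spoofed, so a real system would treat this as one signal among many.

```python
from datetime import datetime, timedelta
from PIL import Image  # Pillow, used to read the photo's EXIF tags

# Illustrative check, not Ground's pipeline: accept a photo only if its EXIF
# timestamp falls within a few hours of the reported event.

EXIF_DATETIME_TAG = 306  # the standard "DateTime" tag in the primary IFD

def photo_matches_event(path, event_time, tolerance_hours=6):
    raw = Image.open(path).getexif().get(EXIF_DATETIME_TAG)
    if raw is None:
        return False  # no capture time recorded; cannot corroborate
    taken = datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")  # EXIF date format
    return abs(taken - event_time) <= timedelta(hours=tolerance_hours)
```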
Castro: AI has been used to create “deep fakes,” such as a video of former President Barack Obama making a public service announcement that he never in fact made. It seems like there’s a bit of an arms race between using AI to make fake news and using it to prevent fake news. Are there new challenges in verifying news when video can be so easily forged?
Kaur: Yes. That’s the second part of what we do. We check the metadata of the file, whether it’s a photo or a video. We can read its signature to see that it hasn’t been tampered with. Then we do image and video analysis to make sure that no changes have been made to the raw file. We can catch that.
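Kaur’s signature check could take many forms; the interview does not describe Ground’s scheme. A minimal stand-in, assuming a trusted reference digest is available for the original file (for example, published by its source), is a straight hash comparison:

```python
import hashlib

# Minimal stand-in for a tamper check, assuming a trusted SHA-256 digest
# exists for the original file. Ground's actual signature scheme is not
# described in the interview.

def file_is_unmodified(path, trusted_sha256_hex):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large videos
            h.update(chunk)
    return h.hexdigest() == trusted_sha256_hex
```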
AI is a technology, and you can use it in different ways. For example, the simplest, seemingly harmless way is how Apple News or Google News use AI to maximize your reading time. They keep showing you more of the stuff you like so you’ll keep clicking to the next story and reading it. There’s nothing harmful about that in itself, except that you end up reading more and more of what you like, and it reinforces your biases. What we want to do is serve you news that is verified, but also news that might make you uncomfortable. We’re going to show you Breitbart if Breitbart is accurately reporting on a news story. So it’s just how you use the technology.
Castro: What is a real-world example of where your app has successfully identified fake news?
Kaur: One interesting case was the Irish abortion referendum a couple of weeks ago. Some news outlets were falsely reporting that the pro-abortion wing was turning violent. That was one of the news stories we were able to stop, and it’s not just that we could catch the doctored videos. We also had users in Dublin uploading photos and videos that we could verify, saying, “Well, actually, everything is very peaceful. The furthest people are going is singing or writing things on Post-it Notes.” That was a great example of how Ground can be used to stop the propagation of fake news.
Castro: Some people have begun to weaponize the term “fake news” to discredit true information. How does your system work to prevent false positives?
Kaur: First of all, I find it really unfortunate, the way the term has been weaponized, as you say. We at Ground truly believe, and we have Melissa Long on board, who is a very experienced journalist, that journalists go through a lot of trouble to verify what they are about to publish and put their credibility on the line before they publish anything.
What we do on our end is make sure that all these verified stories surface so people can read them. If somebody is putting their integrity, and sometimes their life, on the line to break a story, we make sure it’s visible to the people on our platform.
We do that by taking news from 15,000 publications and vetting those publications based on, among other things, when they were established, which is a very telling factor. One of the reasons fake news spreads is that outlets can be created instantly on the Internet: people publish from something that looks like a news outlet but has only existed for a day, or even half an hour. We make sure that a news outlet has been around for a while, has a publishing history, and is accredited.
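One way to encode that domain-age rule is sketched below. `lookup_creation_date` is a hypothetical stand-in for a WHOIS query (for example, via the python-whois package), and the one-year threshold is invented for illustration; it is not Ground’s actual policy.

```python
from datetime import datetime, timezone

# Sketch of the domain-age vetting rule from the interview.
# `lookup_creation_date` is a hypothetical WHOIS-lookup callable.

MIN_AGE_DAYS = 365  # illustrative threshold, not Ground's actual policy

def outlet_is_established(domain, lookup_creation_date):
    created = lookup_creation_date(domain)  # timezone-aware datetime or None
    if created is None:
        return False  # unknown registration date: fail closed
    age = datetime.now(timezone.utc) - created
    return age.days >= MIN_AGE_DAYS
```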
Ground is just one tool. But what we enable people to do is report, and verify or debunk, any news story. Whenever news breaks around you, we send you a notification that says, “This is what has been reported around you. Is it true? Can you verify or debunk it?” This is crowdsourcing the verification of news. The more people participate, the more they become part of the solution. I think that will really help, because credibility can only be built from the ground up. Litigation is not going to help, and a single organization labeling news outlets as trustworthy or not is not going to help either.
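A toy version of that crowdsourced verdict might tally responses as below. The quorum and margin values are invented for illustration; the interview only says that nearby users are asked to verify or debunk.

```python
from collections import Counter

# Toy aggregation of crowdsourced verify/debunk responses, not Ground's
# actual logic. Quorum and margin are illustrative assumptions.

def crowd_verdict(responses, quorum=10, margin=0.7):
    """responses: iterable of "verify" / "debunk" strings from nearby users."""
    tally = Counter(responses)
    total = tally["verify"] + tally["debunk"]
    if total < quorum:
        return "insufficient responses"
    if tally["verify"] / total >= margin:
        return "verified"
    if tally["debunk"] / total >= margin:
        return "debunked"
    return "disputed"
```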