
5 Q’s for Viktoras Daukšas, Head of Debunk.eu

by Eline Chivot

The Center for Data Innovation spoke with Viktoras Daukšas, head of Debunk.eu, an initiative in Lithuania to identify and respond to disinformation online. Daukšas discussed how Lithuanian journalists are using Debunk.eu’s technology to respond quickly to false information and stop its spread.

This interview has been edited.

Eline Chivot: How does Debunk.eu’s rapid alert system work, and what do you monitor?

Viktoras Daukšas: Debunk.eu is a Lithuania-born initiative to efficiently monitor trending and potentially harmful information narratives. It includes an AI-based analytics tool that spots and identifies topics of interest in online articles within two minutes of publication; “elves” (volunteers) and journalists from civil society who verify claims; and newsrooms with significant national coverage.

The idea for the initiative was shaped by the geopolitical climate in Lithuania. For a long time, the country has been a target of Russian disinformation campaigns. In 2017 alone, it faced 55,000 cyber-attacks and tens of thousands of instances of disinformation, which were nearly impossible to track manually. We see no slowdown in the intentional spread of disinformation targeting trust in existing models of governance, and over time it is becoming even more advanced and creative. As a country between East and West, we realize how important it is to deploy preventative measures. We saw clearly what consequences hybrid warfare can have in Ukraine, and we are determined to use our knowledge and skills to prevent the same from happening in Lithuania.

The efforts to spread disinformation are well coordinated by well-trained actors, whereas debunkers—the media, think tanks, academia, fact-checkers, “elves,” and StratComs—mostly operate individually. An analysis of 100+ organizations showed that most parties used the outdated 2G approach, where the first G stood for Google Search (manual monitoring) and the second G for “gut feeling” (no data-driven evidence), mostly because the technology needed was too expensive. With Debunk.eu, we have tackled the problem in several ways. First, we monitor over a thousand domains which have historically contained instances of disinformation, and approximately 20,000 articles a day within those domains. Second, we use AI to spot disinformation. Third, we boost the productivity of journalists, think tanks, fact-checkers, StratComs, and academia by providing them with insights about the most potentially harmful content online and by automating at least 50 percent of manually performed tasks. Fourth, we united major media outlets in Lithuania under one platform and enabled them to boost the media literacy of the country’s citizens.

Our operational scheme is based on the following pillars:

  1. The system scrapes tens of thousands of articles daily in Lithuanian, Russian and English languages.
  2. The platform automatically labels narratives and other data and assigns them to different categories.
  3. AI spots potential disinformation within two minutes of publication, based on known disinformation narratives.
  4. Elves and fact-checkers validate the articles with the potentially most harmful narratives (on average, 2 percent of the entire content scraped).
  5. Elves and fact-checkers select real disinformation cases and mark them in the system.
  6. Journalists are automatically notified of the articles.
  7. Journalists write articles which debunk false information.
  8. Articles are published in mainstream media channels to reach 90 percent of the Lithuanian audience.
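The scrape-score-triage steps above can be sketched in code. This is an illustrative Python sketch, not Debunk.eu’s actual implementation: the classifier is stood in for by a simple narrative keyword match, and all names, narratives, and the 2 percent threshold (taken from the interview) are only assumptions for the example.

```python
# Hypothetical sketch of the pipeline described above.
# All identifiers and the keyword "classifier" are illustrative,
# not Debunk.eu's actual code or model.
from dataclasses import dataclass

@dataclass
class Article:
    url: str
    text: str
    risk_score: float = 0.0  # model-estimated probability of disinformation

# Stand-in for the library of known disinformation narratives.
KNOWN_NARRATIVES = ["nato aggression", "failed state"]

def score(article: Article) -> Article:
    # Stand-in for the AI classifier: fraction of known narratives matched.
    hits = sum(n in article.text.lower() for n in KNOWN_NARRATIVES)
    article.risk_score = hits / len(KNOWN_NARRATIVES)
    return article

def triage(articles: list, top_fraction: float = 0.02) -> list:
    # Route only the ~2 percent highest-risk articles to human fact-checkers,
    # mirroring step 4 of the pipeline.
    ranked = sorted((score(a) for a in articles),
                    key=lambda a: a.risk_score, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]
```

In this sketch the humans stay in the loop: the model only ranks, and elves and fact-checkers validate whatever `triage` surfaces, as in steps 4 and 5.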

The platform saves three to four hours per day per journalist, previously spent on manual work. Currently we scrape articles in three languages—Lithuanian, Russian and English—but the goal for 2019 is to scale the platform to all official EU languages.

Chivot: Does your team rely on a particular hierarchy of sources? Is there a rule of thumb about the reliability of certain sources over others?

Daukšas: We track articles based on narratives, taking into account historic occurrences of disinformation and the readership of the website. However, given shifts in what society perceives as the greatest threats, we are moving from the concept of completely made-up stories toward a definition of misinformation that also covers domains usually considered reliable, but which may include misleading political statements, distorted facts in opinions and claims, and so on.

Chivot: How do you decide what sort of facts to check? Are some issues too complicated to take on?

Daukšas: What are the most trending topics in Spain today? What stories had the highest outreach in the media and social networks in the UK yesterday? What if some disinformation slipped through articles which raised the highest interest in society? These questions are a headache for fact-checkers willing to track broad narratives in which potential instances of misinformation and disinformation may occur, e.g., trending regional topics, statements of politicians, economic claims, etc. Unfortunately, most fact-checking organizations still lack tools which would aggregate, categorize, and analyze relevant data, and which could make their work more efficient by saving hours of manual work. This is where our tool takes the stage: based on the relevance of the topics, the probability of disinformation and misinformation within an article, and the reach on the web, we fully adjust the platform to the users’ needs, freeing their time to do what is essential—identify and debunk false information.
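The three signals named above—topic relevance, probability of disinformation, and reach on the web—suggest a simple weighted ranking. The sketch below is hypothetical: the weights, field names, and linear combination are assumptions for illustration, not Debunk.eu’s actual scoring model.

```python
# Illustrative ranking over the three signals from the interview.
# Weights and field names are hypothetical.

def priority(relevance: float, disinfo_prob: float, reach: float,
             weights: tuple = (0.3, 0.5, 0.2)) -> float:
    """Weighted combination of the three signals, each assumed in [0, 1]."""
    w_rel, w_prob, w_reach = weights
    return w_rel * relevance + w_prob * disinfo_prob + w_reach * reach

def rank_articles(articles: list) -> list:
    # Highest-priority articles first, so fact-checkers see them immediately.
    return sorted(
        articles,
        key=lambda a: priority(a["relevance"], a["disinfo_prob"], a["reach"]),
        reverse=True,
    )
```

Tuning the weights per user is one way the platform could be “fully adjusted to the users’ needs,” as the interview puts it.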

Chivot: What is an example of how you have contributed to detecting and debunking a malicious campaign?

Daukšas: In May 2018, pro-Russian separatists spread the “news” that three NATO soldiers died and two were injured when their car hit a mine near Avdijivka in eastern Ukraine, near the front line. The fake news quickly resonated through Russian propaganda channels. Fortunately, Lithuanian elves took action: after detecting the case on our platform, they debunked it within two hours, preventing it from spreading beyond the original source. DELFI, the biggest online portal in the Baltic states, published an article informing society about the disinformation case.

Chivot: With the European elections approaching, how are you and other fact-checkers working with EU policymakers, national governments, and online platforms? And how can this “teamwork” be improved?

Daukšas: It is our biggest target for 2019—to scale the platform across the EU to help countries counter disinformation and misinformation in their own markets. It is vital to understand that those who spread disinformation act in a coordinated manner and are well-trained, whereas debunkers mostly operate individually. In order to efficiently tackle the issue, we have to reach out to and connect fact-checkers, think tanks, academia, media organizations, and local StratComs to create a Europe-wide community to counter disinformation. For the upcoming elections in Europe, we have already taken action and are in discussions with two states to help them track online trends during this important democratic process.
