
5 Q’s with Maarten Stolk, Co-Founder & CEO of Deeploy

by Patrick Grady

The Center for Data Innovation spoke with Maarten Stolk, co-founder and CEO of Deeploy, a company that provides a platform to help developers understand the outcomes of their AI models. Deeploy is headquartered in Utrecht, the Netherlands. Maarten discussed explainable AI, including where explanations of AI decisions are most in demand and what future innovations in the field may look like.

Patrick Grady: Why did you decide to launch Deeploy? What were your initial goals?

Maarten Stolk: I used to work as a data scientist and learned the hard way that operationalizing AI models is difficult. A lot of questions arose when I wanted to operationalize a model: “Do we actually monitor what’s happening?” “Do we understand how we came to a conclusion?” “Can we act quickly when things go wrong?” “Is it clear who is responsible and accountable for something?” “What happens when people get feedback?” I believe that, by design, we should be able to answer those questions when building AI systems. And that’s why we started Deeploy in the first place.

Grady: How do you understand the term explainable AI?

Stolk: To me, it’s quite a broad term. In one sense, it means explaining how AI decisions are made to end users, which is often referred to as local explanations. But explaining what was used to build the AI system or the model, what the dataset looks like, and which controls we have in place is also part of a broader definition of explainable AI. Much of it also comes back when we talk about AI governance, which includes who takes ownership and how the processes are defined to act on alerts and feedback.
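The “local explanations” Stolk mentions can be made concrete with a small sketch. The example below is purely illustrative and not Deeploy’s implementation: it uses a linear model, where each feature’s contribution to a single decision is simply its coefficient times its value; libraries such as SHAP or LIME generalize this idea to non-linear models.

```python
# Minimal sketch of a local explanation for one prediction of a linear model.
# Illustrative only; not Deeploy's product or code.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

scaler = StandardScaler().fit(X)
model = LogisticRegression(max_iter=5000).fit(scaler.transform(X), y)

# Explain a single prediction (one loan application, one diagnosis, ...).
x = scaler.transform(X[:1])[0]
contributions = model.coef_[0] * x  # per-feature push toward the positive class

top = np.argsort(np.abs(contributions))[::-1][:5]
print("prediction:", model.predict(scaler.transform(X[:1]))[0])
for i in top:
    print(f"{names[i]}: {contributions[i]:+.3f}")
```

The output is a short, per-decision ranking of which inputs pushed this particular prediction up or down, which is the kind of explanation an end user or reviewer can act on.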

Grady: Where do you see the most demand or utility for a product like Deeploy?

Stolk: In my experience, where AI has the most value at the moment is the financial industry, especially if you look at the European markets. Our financial system has not been working optimally. It can take months to get a bank account, especially at the more traditional banks. There are lots of processes we can and should automate using AI. It’s really frustrating as an entrepreneur when you can’t get a bank account within a few hours: you can’t start doing business, and you can’t pay your bills without one. So I think we should use AI much more in the financial industry, for example for credit risk, loan applications, know-your-customer (KYC) processes, or transaction monitoring. Those are all typical use cases where AI has a big impact, but when things go wrong, they can go terribly wrong for individuals. Imagine you lose access to your bank account and cannot pay your bills anymore. So we need to be absolutely in control of what’s happening.

Healthcare is also interesting for us. There’s so much you can do with AI, from diagnostics to early disease prevention. We collaborate with one customer called NiceDay, a Dutch company providing an app for mental health care. It tracks clients who struggle with anxiety or depression and tries to act early on the data. Whenever you notice in the data that things are getting worse, you want to act on it as soon as possible. But defining which patterns indicate deterioration is complicated. There are lots of different variables that together define well-being, and both explainability and the feedback loop are crucial here. I think you’ll see those applications more often in the healthcare industry because using AI can save lives. At the same time, it’s clear to everyone that explainability and control are crucial to applying AI responsibly.

Grady: What is the role of regulation, particularly the EU’s AI Act, in making AI more explainable?

Stolk: There’s still a debate about where it’s going and what will be included in the final Act. In general, we have to make sure we’re in control, which is defined as effective human oversight in the AI Act. This includes concepts like monitoring, explainability, traceability, clear ownership, and feedback loops. How far you go in enforcing human oversight, and in which way it’s most effective, depends on the case. Hopefully, the AI Act forces us to think about these aspects by design when working on an AI system, without losing ourselves in developing just the AI model. It provides some legal certainty, which means it also encourages innovation in a more sustainable way.

Grady: What are some examples of future innovations in this field?

Stolk: Where research used to be really focused on explaining a model, it’s starting to become more of a conversational XAI topic. Basically, if you see AI as one of your colleagues, you want to be able to ask them about different aspects of their decision process, like “Which data did we use?”, “What’s most important?”, or “What if we change male to female, would it give a different output?” Those are the kinds of questions you want to be able to ask an algorithm.

We have been working quite hard on a chatbot for interacting with AI models, which gives different types of explanations back to end users, depending on the kind of explainability needed. Different explainers are needed in different contexts for different kinds of users. And I think the world is slowly moving toward having a conversation with AI algorithms, rather than just providing one kind of explanation that may not be really understandable or may not give the answers you need.
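The “what if we change male to female” question above is essentially a counterfactual query. A minimal sketch of that idea, assuming a hypothetical credit-scoring model and made-up feature names rather than anything from Deeploy, is to flip a single input and compare the model’s output before and after:

```python
# Counterfactual check: flip one feature and compare the model's output.
# Toy data and feature names are hypothetical; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: [income_in_thousands, age, gender_flag] for a toy credit decision.
X = np.column_stack([
    rng.normal(50, 15, 1_000),
    rng.integers(18, 70, 1_000),
    rng.integers(0, 2, 1_000),
])
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 10, 1_000) > 70).astype(int)
model = LogisticRegression(max_iter=1_000).fit(X, y)

applicant = np.array([[48.0, 35.0, 1.0]])
counterfactual = applicant.copy()
counterfactual[0, 2] = 0.0  # flip only the gender flag

p_before = model.predict_proba(applicant)[0, 1]
p_after = model.predict_proba(counterfactual)[0, 1]
print(f"approval probability: {p_before:.3f} -> {p_after:.3f}")
# A large gap here would be a red flag worth surfacing to end users.
```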

