
5 Q’s for Mikael Munck, Founder and CEO of 2021.AI

by Eva Behrens

The Center for Data Innovation spoke with Mikael Munck, founder and CEO of 2021.AI, a company in Denmark that provides management and oversight of enterprise AI systems. Munck discussed the challenges companies that operate internationally face when having to comply with multiple sets of AI regulations, what responsible implementation of AI systems can look like, and how bias in training data causes bias in algorithms’ outputs.

This interview has been edited.

Eva Behrens: What challenge or opportunity did you identify that led to the founding of 2021.AI, and how do you aim to address it?

Mikael Munck: I was working as global head of technology and operations at the Danish bank Saxo Bank, and we were investing in machine learning capabilities. It was around 2010 when we started a few specific projects there. And it was not easy; it was actually quite difficult. To secure experience in the field, we had to hire people from New York and London who had worked with machine learning. Initially, we developed trading and hedging models, with great results. But it was very complicated and very expensive. At that time, we had to code the models from the bottom up, as there were no open source libraries or anything like that available. So that was one challenge.

The other challenge was re-training these models, running them in production, and integrating them into the rest of our tech stack. That was not a trivial task, and it was a very manual one. The good news was that we could do it because we were a large, resourceful IT organization. This also made me realize that not a lot of companies in the world would have the same capacity to work with AI and machine learning, which meant that this technology would only be in the hands of the few. I realized that there would be a need for an easier way to develop, deploy, and operate AI and machine learning models. And that is at the core of what we do at 2021.AI.

While our initial focus certainly was on the development and deployment of AI, we soon realized that this solved only half the challenge. The second half was to ensure that these new technologies complied with regulations and ethical guidelines—in short, responsible use of these technologies. Those two components, in combination, are what you need to use AI and machine learning today. You cannot have one or the other; you need to have both.

Behrens: Different jurisdictions, including the EU, the UK, and several U.S. states, are developing regulations for AI systems. What general trends do you see emerging in AI regulation, and what are some of the steps companies will have to take to comply?

Munck: We started our work with AI governance more than three years ago with the EU Commission’s Ethics Guidelines for Trustworthy AI. We were one of the 50 companies that the EU worked with in developing those guidelines, which led to the ALTAI (Assessment List for Trustworthy AI) and later to the EU AI Act. What we also learned was that in the EU, the focus is on the impact of the models on humans. The focus is not necessarily the same everywhere; other parts of the world have their own regulations and guidelines. The UK now has the ICO. For the US, we only have to go back to October, when the White House issued its guidelines for AI.

The last time we counted, I think, there were around 160 best practices, guidelines, and regulations for AI, and the number continues to grow. So a very big challenge is: if you are, let’s say, a company in Europe, and you want to work in the United States, in the Far East, and in Canada, how many of these guidelines and regulations do you actually have to comply with? That, for me, is the biggest challenge at the moment. I was just at the OECD Global Partnership on Artificial Intelligence (GPAI) in Tokyo, and one of the big discussions there was around the horizontal regulations that we see, for example, from the EU, and the vertical regulations that we have at the same time in sectors like health care, life sciences, and finance—for example, the medical device regulation (MDR), or different financial rules such as SR-11-7. There is also now, and will certainly be for some years, confusion for companies around the question: if I comply with one regulation, do I also comply with the other? Or do I still need to comply with both? I think this will cause a lot of headaches for people who want to stay compliant in the coming years.

Behrens: Your website states that one of your goals is to help businesses implement AI “responsibly.” How do you define “responsible” AI implementation, and what does it entail?

Munck: That is a very good question in relation to what we just talked about. At the moment, we see a lot of regulations, which are not optional, and at the same time more and more ethical principles and ethical best practices in the field of AI. When we talk about responsible AI implementation, we mean these in combination. Most of our clients want to combine compliance with the law with ethical criteria in the way they develop, deploy, and use AI and other advanced technologies. And here, for a moment, I am going back to my earlier answer about how the EU looks at regulating AI through its human impact. The point is that, over time, we must be prepared to widen the scope to include all technologies that have such human impact.

Our focus is certainly AI, and that is also where we see most of the regulation coming out. We must, however, be prepared that within the EU there will be laws regulating other technologies—ones that are less opaque and not as hard to explain as AI. AI is simply where this regulation has started.

We offer a complete solution supporting global regulations, ethical guidelines, and bespoke best practices where you need to implement them. This gives you, as a company, the full array of regulations out of the box, plus the freedom to pick and choose in addition what is right for your responsible use of AI.

Behrens: What are some of the unique, unexpected risks that are associated with AI systems but which do not occur with conventional algorithms?

Munck: The big issue here is typically bias. What we see when working with machine learning and AI models is that bias develops from the data set you train the models on. If there is some bias in the data set that shows a trend in a certain direction, the model will certainly inherit that bias in the way it predicts and operates in day-to-day use. And that is a big challenge. The new AI bias law in New York City is only the beginning of a very long journey of very specific AI regulations addressing issues such as bias.
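The dynamic Munck describes—a model inheriting the skew of its training data rather than inventing bias on its own—can be illustrated with a minimal sketch. The hiring data below is invented for illustration: group "A" candidates were historically hired far more often than group "B" candidates, and even the simplest possible model reproduces that pattern.

```python
from collections import Counter, defaultdict

# Hypothetical, skewed historical hiring data as (group, hired) pairs:
# group "A" was hired 80% of the time, group "B" only 20%.
# The bias lives in the data, not in the learning algorithm itself.
training_data = (
    [("A", 1)] * 8 + [("A", 0)] * 2 +
    [("B", 1)] * 2 + [("B", 0)] * 8
)

# A minimal "model": count historical outcomes per group...
counts = defaultdict(Counter)
for group, hired in training_data:
    counts[group][hired] += 1

def predict(group):
    # ...and predict the majority historical label for that group.
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # 1 — the model recommends hiring A-group candidates
print(predict("B"))  # 0 — and rejecting B-group ones, inheriting the skew
```

Real models are far more complex, but the mechanism is the same: a trend in the training data becomes a trend in the predictions.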

Behrens: You’ve warned about the emergence of “shadow AI” in organizations. What is shadow AI, and why should organizations address it?

Munck: The term shadow AI actually stems from my chairman of the board, Peter Søndergaard’s, definition of shadow IT while he was at Gartner. That term was widely used for organizations where business units were doing their own IT projects and buying their own IT. The point at the time was the risk that the structure, organization, and usage of IT would become less efficient if no one had a total overview. Today it is more accepted that business units run their own IT projects to stay agile. With shadow AI, it is a different story. You need to have an inventory of all models (not just AI and machine learning) that you are using within your organization. And that goes for the models you have developed yourself, the models you have bought from third-party vendors, and the models that are embedded in third-party systems.

It is not necessarily a trivial task to map that out. Let’s say you have an AI model running in your HR department that is biased towards certain types of skills; then you have a big challenge on your hands. This model inventory is a good starting point for many when we talk about AI model governance, where the end goal is to ensure compliance with regulations, ethical guidelines, and other best practices. That is the full journey we take our clients on with our GRACE platform.
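As an editorial illustration of the inventory Munck describes, the sketch below records each model's origin (in-house, bought from a vendor, or embedded in a third-party system) alongside the rules that apply to it. The field names and entries are assumptions for illustration only, not the schema of 2021.AI's GRACE platform.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One entry in a model inventory (illustrative fields only)."""
    name: str
    origin: str            # "in-house", "vendor", or "embedded in third-party system"
    purpose: str           # e.g. "CV screening in HR"
    regulations: list = field(default_factory=list)  # rules/guidelines that apply

# A toy inventory covering the three origins Munck mentions.
inventory = [
    ModelRecord("cv-screener", "vendor", "CV screening in HR", ["EU AI Act"]),
    ModelRecord("churn-model", "in-house", "customer retention"),
    ModelRecord("spam-filter", "embedded in third-party system", "email filtering"),
]

# Governance starts with knowing what runs where and which rules apply.
unreviewed = [m.name for m in inventory if not m.regulations]
print(unreviewed)  # models with no regulations mapped yet
```

Flagging entries with an empty `regulations` list shows why the inventory is a starting point rather than an end state: it surfaces the models whose compliance obligations still need to be assessed.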
