The Center for Data Innovation spoke with Priti Padhy, CEO of Cognino AI, a UK-based startup that uses “explainable AI” to help businesses make decisions in highly regulated industries. Padhy discussed why he thinks explainable AI can democratize the field of artificial intelligence.
Gillian Diebold: What was the catalyst for founding Cognino?
Priti Padhy: Current-generation AI systems are extremely powerful: they can perform speech recognition and sentiment analytics, power driverless cars, and more. But there are fundamental flaws in statistical learning approaches to AI. There is little contextual understanding, and none of the decisions can be explained. The core of our thinking was that we wanted to redefine AI and the common approach to building it. We were very aware that this redefinition of AI was not going to come from large organizations; it had to come from people willing to go in a very different direction. It had to start with the conviction that current AI does not give us all the foundational elements to truly rely on it, such as explaining what data an algorithm takes in and how it arrives at a decision. If you cannot explain it, you can't use it for anything, especially in health care and financial services.
Our core motivation was twofold. First, the current world of AI is limited: it lacks the building blocks of contextual knowledge and intelligence. Second, its decisions cannot be explained to most people. So we wanted to redefine that.
Cognino is an AI-first organization. We have a clear purpose to democratize AI and we are truly mission-led, looking to transform data into intelligence. We have built a unique explainable AI engine that adapts, learns, and explains outcomes from a vast amount of data and creates contextual knowledge and intelligence.
Diebold: Cognino creates “explainable AI.” What does that mean?
Padhy: Explainable AI is a set of processes and methods that allows humans to understand and trust the results or output created by machine learning algorithms.
Explainable AI helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. Our engine uses generative models that build contextual knowledge on a topic and provides a ready-made AI cloud pre-trained with contextual data from a wide variety of sources. The AI cloud has working knowledge of business concepts and can identify opportunities and mitigate business risk. On top of this, Cognino's products can be deployed on all major clouds, on premises, and in containers.
Diebold: Why does a business need an explainable AI system?
Padhy: Explainable AI is extremely important to understand why the engine made each decision. This adds another key layer of justification for the decision-making outcomes.
In my opinion, it is imperative for an organization to have a full understanding of its AI decision-making processes and accountability for AI. Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning, and neural networks. This aids the justification of any decision-making, which is extremely important in highly regulated industries, and helps mitigate the compliance, legal, security, and reputational risks of production AI, resulting in justified decision-making without compromising on performance.
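To make the idea of "justifying each decision" concrete, here is a minimal sketch (not Cognino's engine; the model, feature names, and weights are hypothetical): for a simple linear scoring model, the score decomposes exactly into per-feature contributions, so every decision can be traced back to the inputs that drove it.

```python
# Minimal sketch of decision explanation for a linear scoring model.
# The weights and applicant data below are invented for illustration;
# they do not reflect any real credit or risk model.

def explain_decision(weights, features):
    """Return the total score and each feature's exact contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

weights = {"late_payments": 0.8, "credit_utilization": 0.5, "account_age": -0.3}
applicant = {"late_payments": 2.0, "credit_utilization": 0.9, "account_age": 5.0}

score, why = explain_decision(weights, applicant)
# Rank features by absolute contribution to surface the main decision drivers.
ranked = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Here `ranked` tells a reviewer which inputs mattered most and in which direction, which is the kind of justification regulators in finance and health care ask for. Real explainable-AI systems handle nonlinear models, where attribution is harder than this exact linear decomposition.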
Diebold: How can explainable AI help democratize the field of AI?
Padhy: Explainable AI allows us to interface with machines using the same semantic understanding our brains use to create causation through context. This is invaluable for users of a product and for people to trust machines in their decision-making processes. Explainable AI allows highly regulated industries to adopt AI technologies into their day-to-day processes, making AI more accessible than it currently is. The potential of explainable AI is immense, and over time it will be fine-tuned into different products and verticals.
Diebold: One of Cognino’s products is an Early Warning System for supply chain risk. Can you explain some of the technology behind the system?
Padhy: Cognino’s AI engine takes a vast amount of data, combines and links the various points together, then creates context and explainability through cause and effect. Our engine leverages technology that is not yet widely applied and can be used to build early warning systems based on causal inference like never before.
Early warning systems can identify blind spots and help you mitigate risks before they become threats to your supply chain. Using contextual AI, millions of connected entities, news items, and events can be analyzed to predict developing threats by understanding how they affect one another. For example, if there is an earthquake in Chile, how does that affect the supply chain of an oil rig in Siberia? Our engine can predict the repercussions of live news and events through causal inference. This can be extremely useful for supply chain disruption, production efficiency, early warning systems, and more.
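The earthquake-to-oil-rig example can be sketched as propagation through a dependency graph. The toy below is an assumed structure, not Cognino's causal model, and the entities and edges are invented: a disruption at one node is walked breadth-first to every downstream entity that depends on it.

```python
# Toy early-warning sketch: propagate a disruption through a directed
# dependency graph. Entities and edges are hypothetical illustrations.
from collections import deque

# supplier -> entities that depend on its output (invented data)
depends_on = {
    "chile_copper_mine": ["cable_manufacturer"],
    "cable_manufacturer": ["rig_equipment_vendor"],
    "rig_equipment_vendor": ["siberia_oil_rig"],
}

def downstream_risks(disrupted_node, graph):
    """Breadth-first walk from the disrupted node to every entity it feeds."""
    at_risk, queue = set(), deque([disrupted_node])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in at_risk:
                at_risk.add(dependent)
                queue.append(dependent)
    return at_risk

alerts = downstream_risks("chile_copper_mine", depends_on)
```

A production system would additionally weight edges by impact likelihood and ingest live news to trigger such traversals, but the graph walk captures the core idea of tracing cause to downstream effect.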