The Center for Data Innovation spoke with Alon Goren, founder and chief executive officer (CEO) of AnswerRocket, a data analytics company that uses generative AI to help businesses make data-driven decisions, such as those about budget allocations or pricing strategies. Goren discussed how large language models (LLMs) have helped the company expand access to data insights for many enterprises and his long-term goal of expanding the analytical capabilities of AI assistants to boost business decision-making.
Martin Makaryan: What was the inspiration behind the company?
Alon Goren: I founded AnswerRocket about 10 years ago with the mission to help organizations make better use of their data for decision-making. The inspiration came from my frustration that the data was available, yet getting answers to follow-up questions often required significant effort and staff time. For example, if you had a specific question about last month’s sales report, you would have to route it to analysts down the company hierarchy and wait days or even weeks for a custom query. I saw the potential to automate that, and our core idea was to use machine learning to turn this process around, allowing businesses to ask questions conversationally without needing to know how to write long or detailed queries.
Over the years, especially with the release of several LLMs, we have been able to expand our capabilities and help enterprises efficiently access data insights and use them to expand their operations or become more productive. We have evolved from building custom solutions for a few key enterprises to offering Max, our AI assistant, which can answer questions about a client’s data and help make sense of the analytical insights they have already generated through our platform. Recent advancements in generative AI have also made it easier to use other sources of information, like unstructured data, to inform business decisions. Structured data fits into predefined categories like spreadsheet columns, while unstructured data comes in varied formats, like images, social media posts, and audio recordings, that do not fit a fixed template. We have worked to use cutting-edge LLMs to make sure every source of data helps inform a client’s business decision.
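The structured/unstructured distinction Goren describes can be illustrated with a small sketch. All names and values here are invented, and the simple heuristic extraction stands in for the kind of parsing an LLM would perform on free text:

```python
import re

# Structured data: records that fit predefined columns (hypothetical values).
structured_sales = [
    {"month": "2024-05", "region": "EMEA", "revenue": 120000.0},
    {"month": "2024-05", "region": "APAC", "revenue": 95000.0},
]

# Unstructured data: free text with no fixed template (invented example).
unstructured_note = (
    "Customer call, May 2024: the EMEA team reported strong demand "
    "but flagged shipping delays affecting roughly 5% of orders."
)

def extract_fields(note: str) -> dict:
    """Toy stand-in for an LLM extraction step: pull loosely
    structured facts out of free text with simple heuristics."""
    pct = re.search(r"(\d+(?:\.\d+)?)%", note)
    region = next(
        (r["region"] for r in structured_sales if r["region"] in note), None
    )
    return {"region": region, "delay_pct": float(pct.group(1)) if pct else None}

print(extract_fields(unstructured_note))  # {'region': 'EMEA', 'delay_pct': 5.0}
```

Once the free text is reduced to fields like these, it can be joined against the structured records, which is what makes unstructured sources usable for quantitative analysis.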
Makaryan: How has deriving insights from data evolved over the past decade?
Goren: The key challenges in data management have not changed significantly. You still need to start with good data and go through a process of refining and validating it to make it useful for business decision-making. However, what has changed is the way we approach these challenges.
We can now work with a broader range of data types and apply more automation to the cleaning and curation process. For instance, connecting different data sources with inconsistent naming conventions or granularity levels was a manual task for data engineers. Now, we can often use LLMs to assist in understanding the context and semantics of different data sources to save time and energy. The ability to process unstructured data has also improved dramatically. We can now extract knowledge graphs or property graphs from thousands of documents and connect them back to structured data much more easily than before.
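The name-reconciliation task Goren mentions can be sketched in a few lines. In practice an LLM might propose the cross-source mapping from context and semantics; here it is hard-coded (and entirely hypothetical) so the merge step itself stays deterministic:

```python
# Two hypothetical sources with inconsistent naming conventions.
source_a = [{"cust_id": 1, "rev_usd": 100.0}]
source_b = [{"CustomerID": 1, "Revenue": 55.0}]

# Mapping from each source's names to a shared schema. Assumed here;
# in the workflow described, a model could suggest this mapping and a
# data engineer would review it.
canonical = {
    "cust_id": "customer_id", "rev_usd": "revenue",
    "CustomerID": "customer_id", "Revenue": "revenue",
}

def normalize(rows):
    """Rename each record's keys to the shared schema."""
    return [{canonical.get(k, k): v for k, v in row.items()} for row in rows]

# With names reconciled, the sources can be combined and aggregated.
combined = normalize(source_a) + normalize(source_b)
totals = {}
for row in combined:
    totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + row["revenue"]
print(totals)  # {1: 155.0}
```

The labor-saving step is the mapping itself; once it exists, the join is routine.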
Makaryan: How do you use AI in your analytics platform?
Goren: We have integrated AI, particularly LLMs, in multiple ways. We use LLMs to understand the questions our users ask about their business, including the context of ongoing conversations. We also use them to explain data and charts in a way that anyone without technical or subject matter expertise can understand and use to make informed judgments. This is especially useful for business leaders and executives, who may not have extensive background knowledge in all aspects of their business.
In both contexts, we use techniques like prompting and classification to extract relevant information from user queries and match it to available data. We carefully provide the LLM with the facts extracted from our analysis to ensure it does not hallucinate. But we also use AI in our “black box,” the analytical engines we have built in our Skill Studio, which allows our clients to customize our AI assistant for their job-specific tasks. These engines perform deterministic, quantitative analyses that produce consistent results for the same inputs.
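The grounding pattern described here, where a deterministic analysis produces facts and the model is constrained to answer from them, can be sketched as follows. The figures are invented, and the model call is omitted since AnswerRocket's actual interfaces are not public; only the prompt construction is shown:

```python
def run_analysis() -> dict:
    """Stand-in for a deterministic analytical engine: the same
    input always yields the same output (values are hypothetical)."""
    return {"q1_sales": 1_000_000, "q2_sales": 1_200_000, "growth_pct": 20.0}

def build_prompt(question: str, facts: dict) -> str:
    """Constrain a (hypothetical) LLM to the supplied facts so it
    summarizes the analysis rather than inventing numbers."""
    fact_lines = "\n".join(f"- {k}: {v}" for k, v in sorted(facts.items()))
    return (
        "Answer using ONLY the facts below. If the facts are "
        "insufficient, say so rather than guessing.\n"
        f"Facts:\n{fact_lines}\n"
        f"Question: {question}"
    )

prompt = build_prompt("How did Q2 sales compare to Q1?", run_analysis())
print(prompt)
```

The division of labor is the point: the quantitative result comes from deterministic code, and the LLM is used only to express it in plain language.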
Makaryan: What trends do you see in the transition to more data-driven business models across various industries?
Goren: I think we are at an interesting point in time when AI models are improving very fast and increasing their capabilities and accuracy every year. These rapid improvements offer an opportunity to handle increasingly complex business tasks that previously seemed years away or were simply too expensive for businesses. But every boardroom and CEO is now under pressure to figure out how to apply generative AI to maximize growth or efficiency. This has led to a high level of interest and desire to experiment with pilots and trials.
However, translating pilots into working applications that replace existing workflows is challenging. It requires a deep understanding of the current processes and tools inside an organization, and often involves re-engineering across multiple parts of it. We are seeing adoption in areas where the benefits are obvious and immediate, like routine administrative tasks that are several times faster with AI than before. But changing end-to-end workflows that span multiple departments and people takes more time and energy. That is why I think we are still in the early stages of identifying the big levers for generative AI, not just incremental projects, and pulling those levers requires engineering AI into existing workflows.
Makaryan: What is your vision for AnswerRocket in the near and long term?
Goren: In the near term, we are balancing research and product development with service capabilities to help customers adapt our solutions. This balance has become more of a priority in the last six months as interest in adopting AI to make data-driven decisions has surged. Looking further ahead, we see a shift from passive question-answering to more proactive, in-depth analysis. Instead of providing quick answers based on data lookups, we envision AI assistants that can perform thorough research, test hypotheses, and provide thoughtful, deep-dive analysis that could be worth thousands or tens of thousands of dollars to a company. This will likely require significant human-in-the-loop interaction to educate the AI system on the context and preferences of each organization. The goal is to guide the AI assistant rather than constantly explaining everything from scratch. Essentially, we are working towards a future where AI can take on more of the deep analytical work, providing recommendations and options for business leaders while minimizing the need for constant human input.