
5 Q’s for Tim Tuttle, CEO of Expect Labs

by Travis Korte

The Center for Data Innovation spoke with Tim Tuttle, the CEO of Expect Labs, a voice software company that makes the MindMeld voice recognition application programming interface (API). Tuttle discussed his vision for the future of voice-driven applications and detailed the “knowledge graphs” that underpin MindMeld’s technology.

Travis Korte: First, can you introduce MindMeld?

Tim Tuttle: The MindMeld API is the first developer platform capable of powering intelligent voice experiences for any app, device or website. Companies use this platform to create voice-powered assistants that use artificial intelligence to understand what users say and automatically find the information they need before they have to type a search query.

To date, only companies like Google and Apple have been able to deploy anticipatory computing technology similar to Expect Labs'. We believe this technology should be accessible to all developers and their users, and that by providing an API that easily plugs this functionality into any web or mobile app, anticipatory computing and AI innovations will advance more quickly.
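To make the idea of "plugging in" voice functionality concrete, here is a minimal sketch of what calling a voice-driven search API from an app might look like. The endpoint URL, parameter names, and response shape are hypothetical illustrations for this article, not MindMeld's actual interface.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and key -- illustrative only, not MindMeld's real API.
API_URL = "https://api.example.com/v1/voice-search"
API_KEY = "your-api-key"

def voice_search(transcript: str, context: list[str]) -> list[dict]:
    """Send a speech transcript plus recent conversational context,
    and get back ranked results the user is likely looking for."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"query": transcript, "context": context},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["results"]

# The app streams what the user says and fetches results
# before the user ever types a search query.
results = voice_search(
    "find me a place for sushi near the office",
    context=["we're staying downtown", "dinner around seven"],
)
for result in results:
    print(result["title"], result["score"])
```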

Korte: Who do you see as your users for this platform?

Tuttle: Expect Labs’ customers include mobile app developers in online commerce, entertainment, and customer support, as well as device manufacturers, connected home providers, and smart car companies. Several hundred companies and developers are already using the API to power a wide range of functionalities, such as context-driven intelligent assistants for mobile applications; improved website and app search using contextual cues in addition to keywords; advanced real-time communication and collaboration applications; and voice- and context-driven dashboards for sales support, help desk, telemedicine, online education, and other knowledge worker applications.

Korte: It looks like there’s a lot going on under the hood. Can you speak a bit about the knowledge graphs you let users build and how they improve predictions?

Tuttle: In short, we crawl websites to build a knowledge graph. Once the graph is built, we listen to what humans say, run it through the knowledge graph, and then use a statistical model to give the user the best answer for what they're looking for. Just like a learning human brain, our graphs get smarter as more information is fed to the database.
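As a rough illustration of that pipeline, the sketch below builds a toy knowledge graph from crawled pages and ranks its entities against a spoken utterance using a simple word-overlap score with a boost for linked entities. The graph structure and scoring here are simplified stand-ins for the statistical models Tuttle describes, not Expect Labs' implementation.

```python
# Toy knowledge graph: each node is an entity with descriptive terms
# harvested from crawled pages; "links" connect related entities.
graph = {
    "golden gate bridge": {"terms": {"bridge", "san", "francisco", "landmark"},
                           "links": {"san francisco"}},
    "san francisco":      {"terms": {"city", "california", "bay", "san", "francisco"},
                           "links": {"golden gate bridge"}},
    "alcatraz":           {"terms": {"island", "prison", "san", "francisco", "tour"},
                           "links": {"san francisco"}},
}

def rank_entities(transcript: str, graph: dict) -> list[tuple[str, float]]:
    """Score every entity by word overlap with what the user said,
    adding a small boost for entities linked to other matches --
    a crude stand-in for a trained statistical ranking model."""
    words = set(transcript.lower().split())
    base = {name: len(node["terms"] & words) for name, node in graph.items()}
    scores = {}
    for name, node in graph.items():
        boost = sum(0.5 for other in node["links"] if base.get(other, 0) > 0)
        scores[name] = base[name] + boost
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "Listen" to an utterance and return the best answers first.
print(rank_entities("what should I see near the bridge in San Francisco", graph))
```

Feeding more crawled pages into the graph adds terms and links, which is the mechanism behind Tuttle's point that the system gets smarter as more information is fed to the database.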

Korte: The first application you’re thinking of for MindMeld is content search. Are there other areas you’d like to apply voice control in the future?

Tuttle: Yes. Content discovery, such as for video; commerce discovery while you're in the store, rather than waiting to check once you're back home; other applications for when your hands are busy, such as walking, biking, driving, or cooking; and interacting with wearables, which often have small touchscreens or none at all.

Korte: I can see this technology being helpful within companies or government agencies who want to search through their data with voice. Are your end-users exclusively consumers, or are you working on enterprise applications as well? If so, can you speak generally about some of the applications you’re seeing?

Tuttle: Both. Intel Capital, Samsung Ventures, and Telefonica Digital are all investors, and we expect that in the next 6 to 12 months they will also be customers; In-Q-Tel is an investor as well. All of these organizations have invested in Expect Labs with an interest in enterprise applications. Whether it be smart assistants in cars, next-generation call centers for customer service agents, or contextually aware queries over massive government databases, the MindMeld API has a variety of both consumer and business applications.
