The Center for Data Innovation spoke to Gary Brotman, CEO of Secondmind, a machine learning company helping the automotive sector achieve sustainability. Brotman discussed how Secondmind uses a combination of probabilistic machine learning techniques to enable automotive engineers to design cleaner cars in less time as the industry navigates the transition to electrification.
Benjamin Mueller: What was the genesis of Secondmind—how did you come to the conclusion that there was an opportunity to put AI to use in the automotive industry?
Gary Brotman: Secondmind was founded five years ago. The premise of the company when it first started was around decision-making. Broadly speaking, machine learning and AI enable decisions. Our original idea was to abstract a lot of the complexity and the math that goes into bounded decision problems to enable broader decision-making capabilities around complicated business optimization scenarios.
Secondmind spent a number of years doing deep research across various industries. We covered everything from gaming, fraud detection, and finance to supply chain optimization and automotive. We did quite a bit of exploration to understand where the opportunities were in applying the technology, and where we could have the biggest impact. At our core, we do probabilistic modeling; we’re not a deep learning company. We focus on Gaussian processes and probability in use cases where data is sparse, or where there’s a need for a very clear understanding of the uncertainty measure within hard and fast boundaries. We combine this with Bayesian optimization and have found these techniques are well suited for engine design and specific mechanical processes in automotive engineering. We were fortunate enough to have the opportunity to work with Mazda, which has the most complex engines in the marketplace today. They have focused a great deal on data and analytics and pioneered model-based design in R&D. Mazda saw what we were doing and understood our tools could have a demonstrable impact on the time it takes to calibrate an engine in production.
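As a rough illustration of the approach Brotman describes, the sketch below fits a Gaussian process to a sparse, noisy data set and returns an explicit uncertainty estimate alongside every prediction. It uses scikit-learn purely for illustration, with made-up data and kernel choices; it is not Secondmind’s own tooling.

```python
# Minimal sketch: Gaussian process regression on sparse data, returning a
# predictive mean plus an explicit uncertainty estimate at every query point.
# scikit-learn is used for illustration only; the data and kernel are made up.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A handful of noisy observations (the sparse-data regime GPs handle well).
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(8, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(8)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict with uncertainty: the standard deviation widens away from the data,
# giving the clear uncertainty measure described above.
X_query = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std_dev = gp.predict(X_query, return_std=True)
print(mean[:3], std_dev[:3])
```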
Many startups build a technological tool and try to turn it into a platform that’s extensible and versatile and can solve many problems in many industries. The key for us was finding an industry where we can make a very big impact, then focus and go deep. All the other industries gave us learnings, and now we’re hunkered down in automotive helping optimize the mechanical processes or workflows, such as powertrain calibration. Early indicators are that our technology can compress time to production, while minimizing the utilization of materials in R&D.
Between now and when there’s a perfectly electrified future, there’s a lot of opportunity to optimize along the way. And we’re here to help with existing use cases as well as accelerating the journey to pure electric.
Mueller: In technical terms, how does Secondmind apply machine learning to car manufacturing?
Brotman: The process of calibrating an engine is extremely complex. Whether you’re dealing with the internal controls of a conventional internal combustion engine system or the electric side of a hybrid, the most complex part is still the internal combustion engine. So we’re looking at the overall powertrain, and when manufacturers like Mazda calibrate it, they take into consideration a variety of physical parameters of the engine itself, plus a number of constraints such as fuel efficiency or emissions thresholds. The goal depends on the manufacturer’s high-level objectives. The objective could be torque, staying under an emissions threshold, or reaching a certain fuel economy target. What we’re very good at is handling a high number of parameters and constraints, where you wind up with millions of potential experiments to run to get to the right setting for a particular objective. Legacy approaches to calibration take the entire engine data search space and manually run a quadrant-by-quadrant grid search, or make predictions to identify the right regions in which to experiment. With Bayesian optimization, we employ an active learning approach to design of experiments that automates data identification, acquisition, and modeling. We are able to more precisely pinpoint promising regions to search, generate settings, and test them. The benefit of using our solution is that you need a fraction of the data to reach the right setting, so the amount of time is significantly reduced. You wind up with time savings, energy savings, and a reduction in the number of engines needed in the testing process.
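To make the contrast with grid search concrete, here is a minimal sketch of the kind of active-learning loop Brotman describes: a Gaussian process surrogate plus an expected-improvement acquisition chooses the next setting to test. The run_test_bench function, the one-dimensional candidate grid, and the ten-experiment budget are hypothetical stand-ins, not Secondmind’s actual calibration process.

```python
# Minimal sketch of Bayesian optimization as active learning for calibration:
# instead of a quadrant-by-quadrant grid search, a Gaussian process surrogate
# proposes the next setting to test via an expected-improvement acquisition.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_test_bench(setting):
    # Hypothetical stand-in for one engine experiment (e.g. measuring torque
    # at a given calibration setting), with a little measurement noise.
    return -(setting - 6.3) ** 2 + np.random.normal(0, 0.05)

candidates = np.linspace(0, 10, 500).reshape(-1, 1)   # parameter search space
X = np.array([[2.0], [8.0]])                          # two seed experiments
y = np.array([run_test_bench(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                                   # budget: 10 experiments
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mean - best) / np.maximum(std, 1e-9)
    ei = (mean - best) * norm.cdf(z) + std * norm.pdf(z)  # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_test_bench(x_next[0]))

print("best setting found:", X[np.argmax(y)][0], "objective:", y.max())
```

Under a fixed experiment budget, the acquisition function concentrates tests in the most promising regions, which is where the reduction in data, time, and test engines described above comes from.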
Mueller: In your experience of deploying models, do you see the need for client-side expertise concerning software engineering and implementation, or do you feel that “out of the box” solutions are the way forward?
Brotman: We try to build as much scale as we can around the most fluid and volatile part of the machine learning pipeline, which is the data and the modeling; that’s where we try to find as much efficiency as possible. Use cases where patterns in the data are easy to recognize are starting to become pretty simple and can be automated relatively quickly when you’re dealing with consistent data sets. When we deal with things like dozens of different parameters in an engine, or specific constraints that bind you, the model engineering becomes a little more bespoke. Even when you have similar data sets, you’re going to find variance. So we have to build enough flexibility into the business while being as deliberate as possible about scaling and hardening everything else in the pipeline, so that our delivery and monitoring are as rock-solid as possible.
Our audience is pretty technical. Test engineers and folks on the production side are our peers. But even then, with the current tools that they have, what happens under the hood with a machine learning model isn’t necessarily their expertise. So our product is abstracted in a way that the client can utilize our tools without having to be an expert in whatever machine learning is used underneath.
That’s always been our approach. We focus on the human-machine learning interface. Even if the user on the other side is savvy, they appreciate a way to do something easier and faster and want it to be compelling. We believe that being in a business-to-business setting doesn’t mean that you can’t aim for customer delight. So we invest quite a bit in user experience because whether there’s a user interface or not, the user experience is what can make or break the product. When you’re talking client-side expertise, we want to make sure that the process that the end-user goes through is one that we can improve, and demonstrate this improvement without them having to go to a computer science class to understand what’s happening.
Mueller: What are some of the advances in AI and ML in the coming decade that will be most impactful in the field of applied or industrial AI?
Brotman: In industrial settings, the application of AI is becoming less about the cloud as a central mechanism or control point, and distributed compute and distributed intelligence are growing in relevance. The capabilities at the node, which today are mostly inference, are improving, and a more robust compute node can allow for discrete model training and then more robust model training. That, combined with 5G connectivity, is going to blow intelligence out the door. We will see distributed intelligence in terms of compute, data, and software. You can go deep into novel architectures for processing neural networks, and we will see some of those elements, as well as improvements in compute using von Neumann architectures, such as tinyML on Arm CPUs, which are becoming quite common. So I think we are going to keep seeing incremental advances in those areas. I don’t see anything that’s going to be a step-change overall where you’ll see “hockey stick” growth or a radical shift in direction. AI is just going to become more commonplace, and it’s going to become easier and cheaper to deploy.
With machine learning, whether you’re talking about deep learning or any other technique, training models and running them for inference is compute-intensive. That means energy use, and energy means fuel, and fuel today means emissions. So I think the areas where we’ll probably see more attention over the next decade will be in maximizing compute efficiency while still getting the result you’re looking for: how can you take a full-float model and bring it down to half precision, or quantize it and get it down to integer math? There will be more ways to optimize, either through the hardware architecture or through the way the models are actually trained. We’re all responsible for making these processes more efficient and sustainable.
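As a rough illustration of that precision reduction, the sketch below takes full-float weights down to half precision and then to 8-bit integers using a simple affine (scale and zero-point) quantization scheme in NumPy. It is a textbook simplification, not any particular toolchain’s implementation; real deployments typically use per-channel scales and calibration data.

```python
# Minimal sketch: reduce full-float weights to half precision and then to
# int8 with an affine (scale/zero-point) quantization scheme. Illustrative
# only; not tied to any specific hardware or framework toolchain.
import numpy as np

def quantize_int8(weights):
    # Map the observed float range onto the int8 range [-128, 127].
    scale = (weights.max() - weights.min()) / 255.0
    zero_point = np.round(-128 - weights.min() / scale)
    q = np.clip(np.round(weights / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float values for comparison with the originals.
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)  # "full float" weights
half = weights.astype(np.float16)                    # half precision
q, scale, zp = quantize_int8(weights)                # integer math

print("max fp16 round-trip error:", np.abs(weights - half.astype(np.float32)).max())
print("max int8 round-trip error:", np.abs(weights - dequantize(q, scale, zp)).max())
```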
Mueller: AI tends to generate controversy in consumer-facing areas when, arguably, its major current impact is “behind the scenes” in industrial applications. In your view, how will manufacturing and engineering change as AI is developed and rolled out further across different sectors?
Brotman: In terms of industrial AI applications, it’s not sexy but it’s the standard efficiency gains: saving time, saving materials, ultimately saving costs, and that will spread irrespective of the industry you’re in or the type of company you are. Your stakeholders, whether they be customers, investors, or partners—everybody expects you to be responsible regarding sustainable practices. The savings of time and the savings in materials and the increase in efficiency will ultimately lead to that. There’s a business motivation to seek sustainability because it makes your business healthier. Agnostic to industry, I think that’s where it’s headed. It’s just good business.