
5 Q’s for Frantz Saintellemy, President and COO of LeddarTech

by Eline Chivot

Frantz Saintellemy, President and COO, LeddarTech

The Center for Data Innovation spoke with Frantz Saintellemy, president and chief operating officer of LeddarTech, a Canadian company providing remote-sensing capabilities to improve the safety of autonomous vehicles. Saintellemy discussed how LeddarTech's algorithms fuse raw data from multiple sensors, such as cameras and LiDAR, into a single high-resolution 3D model of the environment, rather than processing data from each sensor separately.

Eline Chivot: Why did you join LeddarTech, and what challenges are you trying to solve?

Frantz Saintellemy: I'm a by-product of the tech industry in the sense that I have always been attracted to technology in one form or another. In late 2014, the company I was working for, ZMDI, based in Dresden, Germany, wanted to get into the Light Detection and Ranging (LiDAR) space, as we felt it was a core technology that we lacked. We discovered several companies based in Canada and initiated contact. One of the most exciting was LeddarTech. At first the conversation centered on a collaboration with LeddarTech; however, when ZMDI was acquired by IDT (now Renesas) in December 2015, the discussion was paused. In 2017, while at IDT, I led an investment by IDT in LeddarTech. After leaving IDT, I decided to join LeddarTech to help scale the company and enable it to deliver compelling solutions to the market.

What I found particularly appealing was the opportunity to participate in the transformation of the automotive industry, with LeddarTech being one of the leading LiDAR companies in the world. Advanced driver-assistance systems (ADAS) and automated vehicles have the capacity to save lives and democratize mobility for the masses, and LeddarTech's technology contributes to that. I find it meaningful to contribute my experience towards making the technology more ubiquitous, reducing the barriers to entry, and making it more affordable and accessible for the vast majority of automotive and mobility companies, especially since LeddarTech's technology can significantly reduce the cost of LiDAR development and deployment.

LeddarTech is quite different from other companies in the sense that, from the outset, it was developing and commercializing solid-state LiDAR (SSL) technologies that leverage unique signal acquisition and signal processing technologies integrated into a suite of systems-on-chip (SoCs). In contrast to traditional mechanical LiDARs, SSLs are more reliable, more integrated, and digital by design. In our case, we leverage semiconductor technologies combined with unique software to deliver an extensible architecture that can scale from Level 1 to Level 5 autonomy. LeddarTech's original core expertise is signal acquisition and signal processing. Our LiDAR technology is unique in that we use a full digital waveform approach: we digitize the entire return waveform and apply unique signal processing software techniques to achieve better detection and more robust classification, which ultimately increases range and effective resolution. For a customer, this translates to lower solution costs whilst providing more reliability and ease of use. With the recent acquisition of VayaVision, we have expanded our solution offering with the addition of raw data perception and a sensor fusion software stack based on an open architecture that is sensor, processor, and operating system (OS) agnostic. The combination of LeddarTech and VayaVision enables us to provide radar, camera, and LiDAR perception and sensor fusion solutions to customers who are already addressing ADAS/AD (autonomous driving) Level 1 to Level 5.
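(Editor's note: as a rough illustration of the full digital waveform approach described above, the following Python sketch is generic and hypothetical, not LeddarTech's proprietary signal processing. It cross-correlates a digitized return waveform with the emitted pulse template, a standard matched-filter technique, and keeps every local peak above a noise-relative threshold, which is how processing the entire waveform can recover weak or overlapping echoes that a single hardware peak detector would discard.)

```python
import numpy as np

C = 3.0e8  # speed of light in m/s

def detect_ranges(waveform, pulse, sample_rate_hz, threshold=4.0):
    """Detect target ranges in a fully digitized LiDAR return waveform.

    A matched filter (cross-correlation with the emitted pulse template)
    concentrates each echo's energy into a peak; keeping every local
    peak above a noise-relative threshold can recover weak returns.
    """
    corr = np.correlate(waveform, pulse, mode="full")[len(pulse) - 1:]
    noise_floor = np.median(np.abs(corr))  # robust noise estimate
    ranges_m = []
    for i in range(1, len(corr) - 1):
        is_peak = corr[i] >= corr[i - 1] and corr[i] > corr[i + 1]
        if is_peak and corr[i] > threshold * noise_floor:
            tof_s = i / sample_rate_hz        # round-trip time of flight
            ranges_m.append(tof_s * C / 2.0)  # one-way distance in meters
    return ranges_m

# Synthetic example: a strong echo at 15 m and a weak one at 22.5 m.
fs = 1.0e9                                   # 1 GS/s sampling
pulse = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)
wf = np.zeros(1000)
wf[100:120] += pulse                         # 100 ns round trip -> 15 m
wf[150:170] += 0.3 * pulse                   # 150 ns round trip -> 22.5 m
wf += 0.02 * np.random.default_rng(0).normal(size=wf.size)
print(detect_ranges(wf, pulse, fs))          # ~[15.0, 22.5]
```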

Chivot: How do LeddarTech’s technologies and innovations work to support autonomous driving, and how can they do so for a wide variety of vehicles to increase accessibility and cost-efficiency?

Saintellemy: In the automotive environmental sensing segment, most technology companies are typically vertically integrated. A perfect case in point is Intel's Mobileye. They come from a camera vision approach and have created a comprehensive, integrated solution that allows the original equipment manufacturer (OEM) and the Tier 1 providers (which supply components to OEMs) to develop a Level 1 solution (Editor's note: Level 1 autonomous vehicles require driver assistance). They have also developed a solution for Level 2 (partial automation) and another for Level 3 (conditional automation). This works exceptionally well if you're thinking in a "dislocative" fashion, redesigning the system at each level. But what the industry truly wants and needs is expandability and extensibility from Level 1 to Level 5. What this means is that, ideally, you want to be able to scale from what you've built for Level 1 to Level 2 without having to revalidate and reverify all the features built for Level 1, because that would be inefficient. Similarly, when scaling up to Level 3, you do not want to have to revalidate all the features developed for Level 2.

In sum, transitioning from Level 1 to Level 2 is a small jump, as it is more or less limited to adding features to a pre-existing camera-and-radar-based system in order to increase safety and offer smoother functions; no redesign is required and thus there is no dislocation.

However, the jump to Level 3 is tectonic: you are adding LiDAR sensing technology, and the complexity increases by orders of magnitude. Having to verify and validate the entire system again becomes a challenging and cost-prohibitive process that requires significant resources and time, thus limiting the introduction of Level 3 ADAS features and the large-scale market deployment of LiDAR.

LeddarTech's solution provides that extensibility. Whatever features you have developed and whatever sensor configuration you have from Level 1 to Level 2, when you need to add Level 3 features that require LiDAR, you do not need to revalidate all the previous features; you only need to validate the new ones. Our technology provides extensibility that reduces complexity, improves feature scalability and modularity, and cuts overall development time and system costs. That is the value of our LeddarEngine and LeddarVision technology offering.

Chivot: As a platform, LeddarTech brings together technologies, products, and expertise. Can you give an example of how this combination can help add value to the development of autonomous driving?

Saintellemy: Think of a vehicle equipped with LiDAR sensors, radars, cameras, GPS, IMUs (inertial measurement units), and ultrasonic sensors. This vehicle is potentially generating terabytes of data. Processing that amount of data is not impossible, but it's a big ask, and it's not efficient. There is a need for more efficient and accurate 3D environmental models that enable the vehicle to detect and understand a scene as quickly as possible and make precise decisions without having to process a terabyte or more of data in real time. In the real world, in a real-time environment, the more data you must process and the more complex the scene, the greater the need for efficiency.

The commonly used sensor fusion architecture is object fusion, meaning each sensor has a separate cognitive engine that detects and classifies the scene. The output from each sensor is then integrated, or fused, into a coherent model called the environmental model. This approach comes with performance limitations in terms of safety and driving comfort (too many missed detections and numerous false positives), and the current cost structure of self-driving systems demonstrates its inadequacy for large-scale commercialization.
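(Editor's note: the hypothetical Python below illustrates the object-fusion pattern Saintellemy describes; the detector stubs stand in for each sensor's separate cognitive engine, and all names and values are illustrative.)

```python
from typing import Dict, List

# Stand-ins for per-sensor "cognitive engines": in object fusion,
# detection and classification happen separately inside each pipeline.
def camera_detector(frame) -> List[Dict]:
    return [{"cls": "car", "bbox_2d": (120, 80, 60, 40), "conf": 0.9}]

def radar_detector(scan) -> List[Dict]:
    return [{"cls": "object", "range_m": 34.0, "velocity_mps": -2.1}]

def lidar_detector(cloud) -> List[Dict]:
    return [{"cls": "car", "bbox_3d": (33.8, 1.2, 0.0, 4.5, 1.8, 1.5)}]

def associate_and_merge(*object_lists) -> List[Dict]:
    # Real systems associate and track objects across sensors;
    # a simple concatenation is enough to show the data flow.
    return [obj for objects in object_lists for obj in objects]

# The environmental model only ever sees per-sensor conclusions, so a
# target that every individual detector misses is lost for good.
environmental_model = associate_and_merge(
    camera_detector(frame=None),
    radar_detector(scan=None),
    lidar_detector(cloud=None),
)
print(environmental_model)
```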

LeddarTech's LeddarVision sensor fusion architecture uses raw data fusion, meaning raw data from all the various sensors is fused together, and detection and classification algorithms are run on the fused model rather than on each sensor separately. Functional safety is also improved when properly implemented. This raw data fusion, combined with patented algorithms, delivers a richer and more robust high-resolution RGB-D (red, green, blue, plus depth) 3D model with fewer false positives, as each sensor's advantages complement the others. Perception is then performed on this high-quality 3D model. This has the added benefit of lowering the cost structure thanks to a leaner architecture and savings on 3D sensors as well as on-sensor processing. Using our technology, we can recreate a robust and accurate 3D environmental model with less data and less processing. This is significant when thinking about cars that include 15 or 20 sensors, each collecting data; without an efficient architecture, you end up with a very complex system that is expensive and difficult to validate and scale in commercial deployments. Combining VayaVision's raw data fusion and LeddarTech's full-waveform processing technologies creates a much more efficient, safer, and less expensive system that can be scaled.
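(Editor's note: by contrast, here is a minimal sketch of one raw-data fusion step, again generic rather than LeddarTech's implementation. It projects raw LiDAR points into the camera image to build the kind of RGB-D model described above, so that perception can run once on the fused result. The function name and interfaces are illustrative.)

```python
import numpy as np

def fuse_rgbd(image, points_xyz, K, T_cam_from_lidar):
    """Fuse raw LiDAR points with a camera frame into one RGB-D array.

    image:            H x W x 3 RGB array
    points_xyz:       N x 3 LiDAR points in the LiDAR frame
    K:                3 x 3 camera intrinsic matrix
    T_cam_from_lidar: 4 x 4 extrinsic transform (LiDAR -> camera frame)
    """
    h, w, _ = image.shape
    depth = np.zeros((h, w), dtype=np.float32)

    # Move raw points into the camera frame before any detection happens.
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]                 # keep points in front of the camera

    # Pinhole projection onto the image plane.
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[ok], u[ok]] = cam[ok, 2]         # camera-frame z is the depth

    # One fused model: RGB channels plus a depth channel; a single
    # detection/classification pass then runs on this array.
    return np.concatenate([image.astype(np.float32), depth[..., None]], axis=-1)

# Example with synthetic data: one LiDAR point 10 m straight ahead.
img = np.zeros((480, 640, 3), dtype=np.uint8)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
rgbd = fuse_rgbd(img, np.array([[0.0, 0.0, 10.0]]), K, np.eye(4))
print(rgbd.shape, rgbd[240, 320, 3])         # (480, 640, 4) 10.0
```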

Chivot: The robustness and reliability of technologies for automated driving tend to receive mixed press. How is that different with LiDAR-sensing technology? Where is this technology heading, in terms of development, deployment, and use?

Saintellemy: Automated driving is a long marathon with multiple sprints. We divide the market between passenger cars and mobility applications. Mobility is essentially anything that moves people or goods: shuttles, robotaxis, automated delivery and ground vehicles, and off-road vehicles such as those used in construction, mining, agriculture, and forestry. These types of vehicles are Level 4 (high automation) and Level 5 (full automation). Such mobility applications generally operate at lower speeds and in controlled, geo-fenced environments, and those controlled, predetermined environments offer lower risk and reduced liability.

In contrast, the passenger car market is expected to operate on any road in any condition. The passenger car owner expects the vehicle to operate seamlessly on roads from Germany to Belgium and from Canada to the United States, in environments entirely unknown to the vehicle. The complexity is therefore far greater. The step function of going from Level 1 to Level 2 to Level 3, and so forth, adds complexity incrementally, which makes it much easier to digest.

Of course, today's technology is not yet reliable enough due to limited deployment. But over the next five to ten years, there will be significant efforts to deploy more automated vehicle technology, and as with many other technologies, the more you use it in real-life use cases, the more reliable and safer it becomes. As end-users become more familiar with it, people will treat autonomous driving and vehicles as the new normal. Think of people who have an automatic emergency braking system in their cars today: they could never do without it, because once you get used to it, you realize how much safer the vehicle is. If you make a small mistake and don't see the car parked in front of you, the vehicle will recognize the situation and apply the emergency brakes on your behalf. This feature and other similar ADAS functions, such as automatic lane changing, traffic jam assist, traffic jam pilot, highway assist, or highway pilot, are part of the automated driving stack. These features will enable us to drive much more safely and will encourage us to adopt vehicle technology that, as it becomes widely deployed, grows significantly more robust and safer.

Chivot: How are autonomous vehicles likely to change the automotive sector and accelerate other existing trends that you can already identify? What could stall or fast-track their use and growth?

Saintellemy: The world is moving irreversibly towards electric vehicles (EVs). Will they be hydrogen fuel-cell vehicles or lithium battery electric vehicles? The energy source is secondary at this point; the crucial thing is that the decision to focus on EVs has been made by all car manufacturers, and there is no turning back now. It may take us 30 years before we get rid of all combustion, diesel, or gas engines, but eventually, we will. The next step is shared and automated mobility. Electrified vehicles can also be automated vehicles. Either way, vehicle ownership in the traditional sense will change: the need to own a vehicle will inevitably decrease. Fewer vehicles will be produced because we will use our cars more efficiently. I've got two cars parked in my driveway that are collecting dust; that's inefficient and costly. With shared and automated mobility, there will be different business models and different dynamics. We see an acceleration in the electric vehicle industry, and through 2021 and beyond we expect a significant uptake of automation across the globe: in shuttles, last-mile delivery, autonomous delivery, short- and long-haul delivery, robotaxis, and various off-road vehicles in agriculture, mining, forestry, and construction. There is no going back; this trend is only going to accelerate. Ultimately, I'm optimistic that in the not-so-distant future, we will use vehicles more efficiently and will be able to significantly reduce our greenhouse gas emissions with the adoption of electrified and automated vehicles.
