5 Q’s for László Kishonti, Founder of AImotive

by Nick Wallace

The Center for Data Innovation spoke to László Kishonti, founder and chief executive officer of AImotive, a Budapest-based company developing artificial intelligence systems for self-driving cars. Kishonti discussed the challenges remaining for developing autonomous driving, and how market forces might shape the future of human driving.

Nick Wallace: There’s a lot of talk about autonomous vehicles and a lot of investment by tech firms and car manufacturers, but they are still not available. What puzzles are there left to solve for this technology to take off? How long do you think it will take until we can buy these from car dealerships?

László Kishonti: There are two puzzles. One is regulation. In the United States, Congress will hopefully reach a conclusion soon about what is possible and what is not—whether to allow real self-driving cars without test pilots. If they decide to allow this, the United States would be the first country in the world to have such a regulation. At the moment, no country allows the sale or use of self-driving vehicles; at most they allow testing. So we're very excited to see what will come out of the United States.

I think none of the politicians or regulators quite know what they want to do or how this market will behave, because it is obvious to me that none of the big companies investing in this have technology that's scalable. Google, maybe Uber, some car companies, and some other technology firms like us have technology that works and has been tested in certain regions, but I don't know of any company that is able to run a global self-driving car operation right now.

If you as a consumer want to buy a self-driving vehicle, you still have a while to wait, not just because the technology isn't ready, but because no one has tested it at the necessary scale. Google has done something like 2 million miles in Silicon Valley, but I'm almost sure that if you ask that car to take you from San Francisco to San José—which is probably the most tested part of their operation—it wouldn't be possible.

Secondly, the computing and sensor hardware is either not ready for mass-market use or simply not available. For example, on Uber cars, the unofficial information says that the cost of the setup is more than US$200,000, which is at least four times as much as the car in which they're deploying it. And I don't think even that setup has the full processing capabilities necessary.

Our company focuses on AI processing, and this technology is relatively new. One of the reasons we developed our own chip design for AI processing is that there is nothing like it available on the market. In most of the cars—including ours—we use large Nvidia chips, which are currently the best solution available. But these chips were designed for graphics, not AI processing, so the processing capacity needed is either very expensive or consumes a lot of power. It is not ready for the mass market. We think that our chip design, and other competing designs, will be available in the next two years. But at the moment the only thing you'll see, even when regulations allow self-driving cars, is prototypes. For the mass market, I think it'll be another five years. Of course, highway autopilot systems are becoming available now, and that's a great achievement, but I think fully self-driving cars are still five years away from mass-market production.

Wallace: Given the very high stakes involved in driving, what does it take to develop a computer system that never glitches or freezes? Are there other scenarios we can learn from where software has to meet that standard?

Kishonti: This is largely a cost issue. In our car, we have two computers: the main one, which makes all the decisions, and a simpler one that is just for "babysitting" and evaluates whether the decisions made by the main computer are safe. If the babysitting computer recognizes an irregularity with the main one, it will slow and stop the car in a controlled way. For example, the main computer always provides a "heartbeat" signal to the other one, so if there is no heartbeat, that probably means the main computer has frozen or stopped working. Similarly, if the main computer tries to execute a strange steering maneuver at high speed, the babysitting computer will take over and slow down the car, expecting a malfunction. The babysitter is not capable of fully processing all the sensor data and making its own decisions about driving; it's just a failsafe, a gatekeeper.
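
The heartbeat-and-takeover arrangement Kishonti describes is a classic watchdog design. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the class name, timeout, and steering threshold are assumptions made for the example, not details of AImotive's actual system.

```python
# A minimal, hypothetical sketch of the heartbeat/watchdog pattern described above.
# The class name, timeout, and thresholds are illustrative assumptions, not details
# of AImotive's actual system.
import time

HEARTBEAT_TIMEOUT_S = 0.2        # assumed: fault if no heartbeat arrives within 200 ms
MAX_STEERING_RATE_DEG_S = 30.0   # assumed: plausible steering-rate limit at highway speed
HIGHWAY_SPEED_MPS = 20.0         # assumed: speed above which large steering moves look suspicious

class BabysitterMonitor:
    """Supervisory computer: checks the main computer's liveness and the sanity of its commands."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        # Called each time the main computer sends its periodic "I'm alive" signal.
        self.last_heartbeat = time.monotonic()

    def check(self, steering_rate_deg_s: float, speed_mps: float) -> str:
        # Fault 1: no heartbeat for too long, so the main computer has probably frozen.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            return "CONTROLLED_STOP"
        # Fault 2: an implausible command, e.g. a violent steering move at high speed.
        if speed_mps > HIGHWAY_SPEED_MPS and abs(steering_rate_deg_s) > MAX_STEERING_RATE_DEG_S:
            return "CONTROLLED_STOP"
        return "OK"

monitor = BabysitterMonitor()
monitor.on_heartbeat()
print(monitor.check(steering_rate_deg_s=2.0, speed_mps=33.0))   # OK
print(monitor.check(steering_rate_deg_s=45.0, speed_mps=33.0))  # CONTROLLED_STOP
```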

But in more expensive systems, such as those in airplanes, there are two parallel decision-making computers, possibly even from two different vendors to limit the risk of them sharing the same hardware or software bugs. I think this will be the solution for commercial autonomous vehicles, because they can be much heavier and more powerful than passenger cars, so the higher costs of having two parallel computers, with the failsafe as the third, are more justifiable.

Wallace: Communication between autonomous cars is important to manage traffic flows, but what about communication between autonomous cars and human drivers? For example, how will a car without a driver replace the wave of a hand when giving way?

Kishonti: If from today all new cars sold were fully autonomous, it would still take at least 11 years in a developed country, and longer elsewhere, for all of the cars on the road to be autonomous. So for the next 30 to 40 years, non-self-driving and self-driving cars will need to share the roads. I don't think it is feasible for them all to communicate. But the situation isn't as bad as it might seem. All of the cars have at least one person in them with a mobile phone. It's a failure of the mobile network operators that they don't use the data they have access to in order to help with traffic management. If you think about it, whenever you have a phone in your pocket, your mobile network operator knows where you are. This could be used in a similar way to the vehicle-to-vehicle communication standards that industry and governments are working on. I don't think it would be safe for the whole communication system to be based on those standards, because not all vehicles will use them—there would need to be something else, and I think mobile phones could provide a good solution.

Wallace: Anyone who has driven widely in Europe knows that it isn’t just the road conditions that vary, it’s the drivers too. Some places are just safer to drive in, not only because the roads are better but because traffic is more orderly, whereas other places are a little more bracing. Will issues like that affect the roll-out of autonomous vehicles, since they will have to share the road with both the good drivers and the bad?

Kishonti: There are multiple algorithms used in autonomous driving at multiple stages, one of which is the recognition stage. That doesn't change much from country to country: wherever you go, cars will usually have four wheels and pedestrians will be more or less the same shapes and sizes.

The next layer is tracking, where you track those recognized objects and try to forecast their behavior. Based on this tracking data, you can make an individual decision about what they might do. If you see someone walking in a nice straight line on the sidewalk, and they seem like a healthy adult, then you can expect them to continue in that direction, and you can keep up your normal speed. But if you see anyone—wherever you are—who is walking or driving with enough variance in their movements to suggest they may be drunk, then the algorithm will forecast a higher probability that this person will make a strange movement, so you need to slow down as you get close.
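
As a rough, hypothetical illustration of that forecasting idea: a tracker can look at the variance of a pedestrian's recent deviations from a straight path and treat high variance as a cue to slow down. The window and threshold below are invented for the example, not values from any real system.

```python
# A rough, hypothetical sketch: the more variance in a tracked pedestrian's recent
# movement, the more cautiously the car behaves. The threshold is an invented
# tuning parameter, not a value from a real system.
from statistics import pvariance

ERRATIC_VARIANCE_THRESHOLD = 0.25  # assumed tuning parameter, in square meters

def caution_level(lateral_offsets_m):
    """lateral_offsets_m: recent sideways deviations of a tracked pedestrian from
    their fitted straight-line path, one sample per frame."""
    if len(lateral_offsets_m) < 2:
        return "SLOW_DOWN"  # not enough history to trust a forecast
    variance = pvariance(lateral_offsets_m)
    # Steady, straight walking -> low variance -> keep normal speed.
    # Erratic movement -> high variance -> forecast a higher chance of a sudden
    # step into the road, so reduce speed while passing.
    return "NORMAL_SPEED" if variance < ERRATIC_VARIANCE_THRESHOLD else "SLOW_DOWN"

print(caution_level([0.02, -0.01, 0.03, 0.00, -0.02]))  # NORMAL_SPEED (steady walker)
print(caution_level([0.4, -0.6, 0.9, -0.8, 0.7]))       # SLOW_DOWN (weaving)
```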

Additionally, it's true that driving styles vary from one place to another, even within the same country. In New York, everyone is much more aggressive than in Dallas. Drivers there use the horn every time there's something they don't like, whereas in Dallas everyone is more patient. You can adapt to that using reinforcement learning: a neural network that learns from the behavior of the environment around it.
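
To make the adaptation idea concrete, here is a toy epsilon-greedy bandit used as a stand-in for the neural-network reinforcement learning Kishonti mentions: it learns which "assertiveness" setting earns the best simulated reward in a given traffic environment. The reward function, settings, and numbers are all invented for illustration.

```python
# A toy epsilon-greedy bandit as a stand-in for the neural-network reinforcement
# learning mentioned above. The reward function and numbers are invented.
import random

LEVELS = [0.2, 0.4, 0.6, 0.8]            # candidate driving-assertiveness settings
values = {lvl: 0.0 for lvl in LEVELS}    # estimated reward for each setting
counts = {lvl: 0 for lvl in LEVELS}

def simulated_reward(assertiveness, env_aggressiveness):
    # Hypothetical simulator feedback: the best reward comes from matching the
    # local traffic style (high in a New York-like city, lower in Dallas).
    return 1.0 - abs(assertiveness - env_aggressiveness) + random.gauss(0, 0.05)

def train(env_aggressiveness, episodes=2000, epsilon=0.1):
    for _ in range(episodes):
        # Explore a random setting occasionally, otherwise exploit the best-known one.
        lvl = random.choice(LEVELS) if random.random() < epsilon else max(values, key=values.get)
        reward = simulated_reward(lvl, env_aggressiveness)
        counts[lvl] += 1
        values[lvl] += (reward - values[lvl]) / counts[lvl]  # incremental mean update
    return max(values, key=values.get)

print(train(env_aggressiveness=0.8))  # tends to settle on a more assertive setting
```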

There is a methodology for combining these algorithms to account for these differences. You can test it a million times in a simulator to make sure the system can adapt to more aggressive drivers, as well as to more patient drivers, very similarly to how you adapt as a human. So there is a solution for the problem you describe: I don't think anyone has tested this globally, but at least locally, it seems to work.

Wallace: Lots of people enjoy driving, so people may not be that quick to hand over control to a computer, even if they trust that it’s safe. What kinds of road traffic do you think will become autonomous first, and how do you envisage the future of human driving?

Kishonti: My daughter has been riding horses since she was four years old: she likes it, but I don't think she would ever want to ride her horse on an open road, which was the standard way to travel 200 years ago. As soon as we have more data about the safety of self-driving features—and there is already some data showing that cars with automated braking cause fewer accidents than those without—this data will have an impact on insurance companies and governments. Sooner or later, it will become more complicated to acquire a driver's license. Eyesight restrictions might become stricter, for example, and the cost of getting a license will increase in terms of both time and money. Insurance premiums for drivers will go up to reflect the higher risk compared to autonomous vehicles. The rich might carry on driving for a longer time, but I think most cost-conscious people, who don't have money to spare on something that's really just a hobby, will turn towards driverless cars. I think driving will become more and more limited to special race tracks, exactly like horse riding today.
