CEOs of autonomous car companies believe a fully self-driving car could be only months away. Many companies predicted we would have autonomous cars by now. There’s real money behind these predictions. These bets are made on the assumption that the software will be able to catch up to the hype. On its face, full autonomy seems closer than ever. Tesla and a host of other companies already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been a few crashes, some deadly, but as long as the systems keep improving, the logic goes, we can’t be that far from not having to intervene at all.
But the dream of a fully autonomous car may be further off than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems are reliable. Experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation.
It’s easy to see why car companies are optimistic about autonomy. Over the past ten years, deep learning has driven almost unthinkable progress in AI and the tech industry. But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are great at recognizing animals as long as they have training data to show them what each animal looks like. Marcus describes this kind of task as “interpolation,” taking a survey of all the images labeled “ocelot” and deciding whether the new picture belongs in the group.
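Marcus’s “interpolation” idea can be sketched in a few lines of code: classify a new input by how close it sits to labeled examples the system has already seen. This is a toy illustration, not how production image classifiers work; the 2-D feature vectors and labels below are invented for the example.

```python
# Toy sketch of classification-as-interpolation: assign a new image's
# feature vector to whichever labeled cluster it falls closest to.
from math import dist

# Hypothetical 2-D feature vectors extracted from labeled training images
training_data = {
    "ocelot":   [(2.0, 3.1), (2.2, 2.9), (1.9, 3.0)],
    "housecat": [(0.5, 0.4), (0.6, 0.6), (0.4, 0.5)],
}

def classify(features):
    """Pick the label whose training examples are closest on average."""
    def avg_distance(label):
        points = training_data[label]
        return sum(dist(features, p) for p in points) / len(points)
    return min(training_data, key=avg_distance)

print(classify((2.1, 3.0)))  # lands inside the "ocelot" cluster -> "ocelot"
```

As long as the new picture falls near an existing cluster, this kind of survey-and-compare works well, which is why systems like Google Images look so capable.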
What’s the problem?
The same algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of an ocelot, even if it’s seen pictures of housecats and jaguars and knows ocelots are somewhere in between. That process, called “generalization,” requires a different set of skills. Researchers thought they could improve generalization skills with the right algorithms. But recent research has shown that conventional deep learning is even worse at generalizing than we thought. One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video.
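The limitation above can be made concrete with the same toy classifier: a system that only interpolates between labeled examples can never output a category it has no training data for. The feature vectors here are again invented for illustration.

```python
# Sketch of why interpolation fails to generalize: with no "ocelot"
# training examples, an ocelot-like input is forced into a known label.
from math import dist

training_data = {  # hypothetical feature clusters; note: no "ocelot"
    "housecat": [(0.5, 0.4), (0.6, 0.6)],
    "jaguar":   [(5.0, 5.2), (5.1, 4.9)],
}

def classify(features):
    """Pick the label whose training examples are closest on average."""
    def avg_distance(label):
        points = training_data[label]
        return sum(dist(features, p) for p in points) / len(points)
    return min(training_data, key=avg_distance)

# An ocelot might sit "somewhere in between" in feature space, but the
# model has no way to say so; it must pick "housecat" or "jaguar".
print(classify((2.7, 2.8)))
```

For a self-driving car, the analogous failure is an unfamiliar road scene being forced into the nearest familiar category, whether or not that category is safe to act on.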
Deep learning isn’t the only AI technique, and companies are already exploring alternatives. Many companies have shifted to rule-based AI, an older technique that lets engineers hard-code specific behaviors or logic into an otherwise self-directed system. It doesn’t have the same capacity to write its own behaviors just by studying data, which is what makes deep learning so exciting. But it would let companies avoid some of deep learning’s limitations. Even so, it’s hard to say how many failure cases engineers can anticipate with hand-written rules.
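The rule-based approach described above amounts to engineers writing explicit condition-action logic instead of learning behavior from data. A minimal sketch, with scenarios and rules invented for illustration:

```python
# Hand-written rules mapping a perceived scene to a driving action.
# The scene keys and actions here are hypothetical, not any vendor's API.

def decide(scene):
    """Return a driving action from explicit, human-authored rules."""
    if scene.get("pedestrian_ahead"):
        return "brake"
    if scene.get("light") == "red":
        return "brake"
    if scene.get("obstacle_in_lane"):
        return "change_lane"
    return "continue"

print(decide({"light": "red"}))            # -> "brake"
print(decide({"obstacle_in_lane": True}))  # -> "change_lane"
```

The trade-off is visible in the code itself: every behavior is auditable and predictable, but the system only handles situations an engineer thought to write a rule for.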
Still, it’s not clear how long self-driving cars can stay in their current limbo. Semi-autonomous products like Tesla’s Autopilot are smart enough to handle most situations, but they still require human intervention if anything too unpredictable happens. When something does go wrong, it’s hard to know whether the car or the driver is to blame. But with deep learning sitting at the heart of how cars perceive objects and decide how to respond, improving the accident rate may be harder than it looks.