Aug 03 2021

Where’s My Self-Driving Car?

A lot of people have noticed that the self-driving car revolution has been…delayed. For the last decade, predictions of when the technology would be ready for mass adoption were converging on the 2020s, beginning early in the decade. In this 2010 article, the prediction was – at least 8 years. Also, “US Secretary of Transportation Anthony Foxx declared in 2016 that we’d have fully autonomous cars everywhere by 2021.” Since then the technology has advanced tremendously, but it has not quite crossed the threshold of fully autonomous vehicles. We are stuck in the “driver-assist” stage. Right now you can get a Tesla with a driver-assist package that can summon your car from its parking space and assist during driving to help avoid accidents. But the driver must always be attentive and at the wheel. Fully autonomous driving is not yet a reality. What happened?

In retrospect it all seems completely predictable, because we have been here so many times before. This pattern does not necessarily happen with every technology, but it is extremely common, especially for new and complex technology. We have seen this with fusion reactors, artificial general intelligence, gene therapy, stem-cell therapy, the hydrogen economy, and flying cars. There are some common themes that keep cropping up. One is the tendency to overestimate short-term progress while underestimating long-term progress. This pattern, in turn, results from some underlying tendencies and cognitive biases.

I think one of the most important is that we tend to default to extrapolating linearly into the future. So we think – if we made this much progress between 2000 and 2010, then we should make similar progress between 2010 and 2020, and that’s when we will cross the finish line. The problem is, technological progress is not always linear. There is a more complex relationship, which can make net progress both faster and slower than we predict. This is because technological progress can be geometric, rather than linear. But at the same time, challenges can become geometrically more difficult, so there are diminishing returns. These are competing geometric issues, and how they sort out can be difficult to extrapolate.
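To make that concrete, here is a toy sketch in Python of the two competing curves. The growth rates are invented purely for illustration – they are not measurements of any real technology – but they show how the net result depends entirely on which geometric curve, improving capability or compounding difficulty, grows faster.

```python
# Toy sketch of competing geometric curves. The growth rates below are
# made up for illustration; they do not describe any real technology.

capability_growth = 1.5   # capability multiplies by 1.5x per period
difficulty_growth = 1.4   # the remaining difficulty also compounds, by 1.4x

capability = 1.0
difficulty = 1.0
for period in range(1, 11):
    capability *= capability_growth
    difficulty *= difficulty_growth
    # The ratio is a crude stand-in for "net progress": above 1 we are
    # pulling ahead, below 1 the problem is outrunning the technology.
    print(f"period {period:2d}: capability/difficulty = {capability / difficulty:.2f}")
```

Nudge either rate slightly and the picture flips, which is exactly why linear extrapolation from the last decade tells you so little about the next one.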

For example, we are all familiar with Moore’s law – the number of transistors on a computer chip doubles about every 18 months. That is a geometric progression, and we have lived the last few decades assuming that computers will keep getting ridiculously more powerful. Our ability to sequence genomes and to make genetic alterations has also been increasing geometrically. This is why the Human Genome Project finished two years ahead of schedule.
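As a rough back-of-the-envelope check (the starting transistor count below is an arbitrary round number, not a real chip), a doubling every 18 months compounds to roughly a hundredfold increase per decade:

```python
# Moore's law arithmetic: one doubling every 18 months.
# The starting count is an arbitrary round number, not a real chip.

base_transistors = 1_000_000
for years in (0, 5, 10, 15, 20):
    doublings = years / 1.5                  # doublings accumulated so far
    count = base_transistors * 2 ** doublings
    print(f"year {years:2d}: ~{count:,.0f} transistors ({doublings:.1f} doublings)")
```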

But at the same time, the problems we are trying to solve with technology can get geometrically more difficult as we try to push success rates closer and closer to 100%. In many cases there are diminishing returns. This occurs whenever safety is a prime concern and very high levels of safety are required. I mentioned gene therapy. In the 1990s we seemed to be on the cusp of a gene-therapy revolution. But then a few cases of severe side effects, including the death of a test subject from a severe reaction to a viral vector, derailed the entire technology for about two decades. We are just now getting back to where we thought we were then. It turns out, it’s hard to control what viruses do. They want to cause infections. We are running into the same problem with stem cells – they want to form cancers.

In the 1950s, technologists thought it was plausible that fusion reactors were right around the corner. Think about that – they were about a century off. The joke now in the community is that fusion power is 30 years off, and always will be. They kept extrapolating out linearly, but the problem of magnetic confinement of plasma is not a linear problem. The more you push it, the harder it gets. We may finally be getting close, but we’ll see.

So it shouldn’t really be surprising that we are running into the same issue with self-driving cars. This is a safety issue. As we push closer and closer to 100% safety (or zero accidents), the difficulty should increase geometrically, even exponentially. Imagine a football field where each ten-yard segment actually doubles in length, so those last ten yards are actually 5,120 yards long. It seems like we are so close, but getting that last measure of capability may take another decade. So now everyone is recalibrating their predictions.
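For what it’s worth, the football-field analogy checks out as simple arithmetic (a quick sketch, nothing more): with ten 10-yard segments where each segment is twice as long as the one before, the last segment alone is 10 × 2⁹ = 5,120 yards, and the whole “100-yard” field stretches to 10,230 yards.

```python
# The football-field analogy as arithmetic: ten "10-yard" segments,
# each twice as long as the one before it.

lengths = [10 * 2 ** i for i in range(10)]
print(lengths[-1])      # last segment: 10 * 2**9 = 5,120 yards
print(sum(lengths))     # whole "100-yard" field: 10,230 yards
```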

I have little doubt we will cross the finish line, at least to some extent. Right now self-driving cars have amazing capabilities. They just have difficulty dealing with increasingly unlikely and unusual circumstances, events, and conditions. Humans are more versatile and adaptable than narrow AI. Humans are bad at maintaining constant vigilance, and may be slow to react in some circumstances, or may even be cognitively compromised (sleepy, distracted, inebriated). Computers are excellent at maintaining vigilance, and can react extremely quickly. But narrow AI can be what experts call “fragile”. Outside of their lane, they are lost. Because they lack a true deep understanding, they may not react predictably or well to novel situations. They cannot problem-solve outside their trained capabilities. So what does this mean for self-driving cars?

The next decade will tell, but here is a likely possibility. Right now we are stuck in the driver-assist phase. It is unclear how popular this will be. Cars right now are increasingly adopting collision-avoidance mechanisms, and these are passive and helpful. But I don’t see the appeal of having the car drive itself if I have to remain vigilant and at the wheel. What’s the point? It may help if I do get distracted, but it may hurt if I get complacent. This may not even be a net advantage. Car companies will have to figure out how best to leverage the existing self-driving capabilities.

But at some point, hopefully sooner rather than later (but at least a decade seems to be the consensus), we may get to the fully self-driving car we were expecting. Perhaps at first this will be limited to designated roads, such as highways, but not the more variable and chaotic city or backstreets. That last measure will be the hardest nut to crack. It may require entirely new narrow AI capabilities. I personally do not think it will require general AI, but that could mean we will have to reverse-engineer how drivers handle unique situations and then figure out a way to duplicate that process in narrow AI.

Complex technologies like this seem to have three phases. There is the early hype phase, where predictions and expectations outstrip reality. This is then followed by the disappointment phase, where general faith in the technology is lost because the unrealistic hype was not realized. In this phase the naysayers will claim the technology will never happen. Then, one of several things happens. The technology may fade away or be indefinitely back-burnered if the primary challenge cannot be overcome. Sometimes it just has to wait for other necessary technologies to advance. The technology may become niche and limited, but never reach its full predicted potential. Or, researchers may quietly and finally push the technology over the finish line, and “suddenly” (decades late) we have the killer applications we always wanted (or at least the next generation does).

We are now in the post-hype disappointment phase for fully autonomous cars. It remains to be seen when and if we will get to the next phase.
