
AI and the Unachievable “Fully Autonomous Vehicle” Use Case 

By Anthony J. Bradley | February 24, 2020

Recent articles like “30-40% Of Tesla Owners Buy Autopilot (But Full Self Driving Is Three Years Away, Expert Says)” have sparked a debate within a few of the circles I travel in (pun intended) on the state of “fully autonomous vehicle” technology.

I thought it very interesting that the participants in the discussion seemed to self-organize into two camps: the pessimists and the optimists. I saw a natural tendency for the pessimists to combat the hype with an overly negative view and for the optimists to combat the negativity with an overly rosy perspective.

The Logical Fallacy of Fully Autonomous

Some participants took the position that fully autonomous vehicles will never materialize (at least not in the coming decades), while others took the position that fully autonomous vehicles are already here. Both of these positions have merit. It all comes down to how you define “fully autonomous.”

If you define “fully autonomous” as a vehicle that can operate completely independently of a human driver, then fully autonomous vehicles do already exist.

If you define a fully autonomous vehicle as a passenger car that can handle all aspects of driving in any driving condition without the need for human involvement, then fully autonomous is probably a very long way off, and depending on how extremely you define “any driving condition,” it may never exist.

I lean more toward the former definition than the latter. Why? Because every other technology is use-case specific, so why would we hold artificial intelligence and autonomous vehicles to a different standard? If you layer on the qualification of “all aspects under any condition,” then you are setting an unrealistic bar for any technology.

Does a technology need to satisfy every possible use case in order to provide significant social and business value? Of course not. We can gain tremendous value from a technology that serves even a small fraction of its use case possibilities. So does it really matter if we achieve “fully autonomous” to the standard of the second definition? I say no, not really.

The “Better than a Human” Dilemma

Another interesting aspect that emerged from the discussion was the “better than a human” criterion.

I am ambivalent about the application of this criterion. On one hand, I think it is very valuable when we apply it to specific use cases, where it can help us understand the appropriate roles of human intelligence and machine intelligence as we address the tough challenges. And my position has always been (and continues to be) that a focus on machines replacing humans is a distraction from the real value of artificial intelligence. The real value comes from applying machine intelligence where it can outperform human intelligence and leveraging human intelligence where it outperforms machine intelligence.

However, if we misapply the “better than humans” criterion to broad challenges such as operating a vehicle in any driving condition, treating patients, etc., we discount both the value of machine intelligence (by holding it to an unnecessarily high bar) and the importance of human intelligence in tackling the big challenges.

It All Comes Down to Use Cases

In case you haven’t noticed, the point of my post is that we should avoid evaluating artificial intelligence capabilities (such as autonomous vehicle tech) in the broad sense and instead ground our realistic evaluation of artificial intelligence in the real-world use cases where it excels. This forces us to move away from generalities that are very difficult to truly evaluate and concentrate on articulating the value, or lack thereof, of specific use cases.

Then again, lots of intelligent humans love the high-level philosophical debates without the hindrance of deeper evaluation. I doubt machine intelligence will replace that capability any time soon 🙂

If you haven’t noticed the wordplay in the title of this post, I’ll state it outright: “Fully Autonomous” is not really a use case, and therefore it is not achievable as a use case.

By the way, we will be discussing the state of AI/ML and other emerging technologies at our upcoming Gartner 2020 Tech Growth and Innovation Conferences, the premier conferences for technology and service providers.

May 11 – 13 | San Diego, CA 

May 18 – 19 | London, UK

Learn More or Register today!
