> fallacy that makes people want to trust self-driving cars
Contrary to what the press releases would have you believe, a lot of successful L4+ autonomous vehicles today are architected first and foremost as non-learning (i.e. traditional robotics) systems, with relatively well-defined domain-specific sub-problems carved out and delegated to learning-based methods (e.g. recognizing all the cars/humans/signs/... in images captured by the vehicle's cameras). These sub-problems tend to have well-defined metrics and massive real-world data sets behind them, and the learned components increasingly report a confidence score alongside their results.
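To make that pattern concrete, here's a minimal Python sketch. Everything in it is hypothetical (the Detection class, the stubbed detector, the 0.6 threshold); it only illustrates the shape of the idea, a deterministic pipeline consuming a learned module's scored output, not any real AV stack.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "car", "pedestrian", "stop_sign"
        bbox: tuple         # (x, y, w, h) in image coordinates
        confidence: float   # detector's self-reported score in [0, 1]

    def detect_objects(camera_frame):
        """Learned component (stubbed here): in a real stack this would
        be a neural detector returning scored detections."""
        return [
            Detection("car", (120, 80, 60, 40), 0.97),
            Detection("pedestrian", (300, 90, 20, 50), 0.41),
        ]

    def plan(camera_frame, min_confidence=0.6):
        """Non-learning component: deterministic rules consume the
        learned module's output and treat low confidence conservatively."""
        detections = detect_objects(camera_frame)
        if any(d.confidence < min_confidence for d in detections):
            return "reduce_speed"   # classical fallback on shaky perception
        if any(d.label == "pedestrian" for d in detections):
            return "yield"
        return "proceed"

    print(plan(camera_frame=None))  # -> "reduce_speed"

The point is that the learned piece is boxed into one measurable job, and the surrounding system decides what to do with its output, including when not to trust it.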
ADVs have come a long way despite all the doubt, and the top players are finally confident enough to remove the human from the driver's seat. That is no small thing in the post-Uber-ADV-fatal-accident world.
Source: I work on ADVs.