> That's like the argument about how we'll never (or should never) have self driving cars.
The reason we won't ever have self-driving cars is that no matter how clever you make them, they're only any good when nothing is going wrong. They cannot anticipate, they can only react, too slowly, and often badly.
They absolutely could anticipate, and arguably with more precision than people. The frequency of collisions during left turns at intersections shows that human anticipation is fallible too: people can't even anticipate that a car driving towards them will continue to do so.
Self-driving cars' reaction times aren't slowed by drugs, alcohol, or a Snapchat notification pulling their attention.
Current systems haven't been proven in all weather conditions and all inclement situations (e.g. that Tesla collision with a white semi-trailer), but it's crazy to say that self-driving cars won't match or exceed human drivers in terms of safe miles driven. Waymo has already shown an 80 to 90% reduction in crashes compared to people.
Can you clarify what you mean by unsafe? From what I can tell from the study, they're comparing to a human benchmark - basically the "average" driver, not a cherry-picked "bad" driver cohort.
Just as with wealth, the average is drastically skewed by outliers. I don't recall precise numbers off the top of my head, but plenty of people have commuted daily for multiple decades and have never been in a collision. I myself have only ever hit inanimate objects at low speeds (the irony) and have never come anywhere near totaling a vehicle; my seatbelts and airbags have yet to actually do anything. Freight drivers regularly rack up absurd mileage figures without any notable incidents.
As I stated earlier, I agree with the broader point you were trying to make. I like what they're doing. It's just important to be clear about what human skill actually looks like in this case - a multimodal distribution that varies heavily by driver category.
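To make the skewed-average point concrete, here's a toy sketch with entirely made-up numbers: a small high-risk cohort can drag the mean crash rate well above what the typical driver in the larger cohort ever experiences, so "beats the average driver" is a weaker claim than it sounds.

```python
# Toy illustration (invented numbers): 90 low-risk drivers who rarely
# crash plus 10 high-risk drivers who crash often. The mean is pulled
# far above the median, i.e. above the "typical" driver's rate.
import statistics

# Hypothetical crashes per million miles driven.
crash_rates = [0.5] * 90 + [20.0] * 10

mean = statistics.mean(crash_rates)      # skewed upward by the outliers
median = statistics.median(crash_rates)  # what a typical driver looks like

print(mean)    # 2.45 - nearly 5x the typical driver's rate
print(median)  # 0.5
```

Anything benchmarked against the mean of 2.45 can look safe while still being worse than the 90% of drivers sitting at 0.5.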
Yeah, I agree with you too. Per IIHS, the fatality rate per 100,000 people ranged from 4.9 in Massachusetts to 24.9 in Mississippi, so clearly there's huge variance even within the "US population".
The other person's comment was "we won't ever have self-driving cars" because they aren't good enough: but something like Waymo already is, at least relative to the population. If we waved a wand and replaced everyone's car with a Waymo, accident rates would fall, both at the population level and per mile driven.
It's even hard to argue that a Waymo would be more dangerous for a good driver: the fleet, too, has never been the cause of a serious accident, and it has certainly driven more miles than any individual human. All four serious-injury accidents and both fatalities were essentially "other driver at fault, hit Waymo".
This isn't meant to glaze Waymo, but to point out that self-driving cars in certain environments are "solved". They're expensive, proprietary, and unsuitable for trucking or cold climates (yet?); but self-driving that is safer than human driving is already here. To your point, human driving skill is variable: Waymo won't replace Verstappen right now, but just like the AGI argument with LLMs, they're already "smarter" than the average person in certain domains.