Just some reasoning really. Statistics and proper normalization are hard.
Tesla tends to claim that Autopilot crashes occur about half as often as non-Autopilot crashes. That's likely not normalized for road conditions. But if you assume Tesla is secretly putting all the hard miles on the humans, that would imply humans are driving many more hard miles and should show higher accident rates. Autopilot, meanwhile, must be performing worse on the easy miles and racking up additional accidents that wouldn't otherwise have happened.
If you combine those two, the overall rate of accidents should be higher than average, but it's actually lower by a fair margin. Again, normalization is hard.
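The combined-rate argument above can be sketched with a toy weighted average. Every rate and mileage share below is invented purely for illustration, not taken from Tesla's data:

```python
# Toy model of the "hard miles on humans" hypothesis (all numbers made up).
# Baseline crash rates per million miles, hypothetically, by road difficulty:
easy_base, hard_base = 1.0, 4.0
easy_share = 0.7                    # assumed fraction of all miles that are "easy"

# Fleet-average rate if humans drove everything:
baseline = easy_share * easy_base + (1 - easy_share) * hard_base

# Hypothesis: Autopilot takes only easy miles but performs worse on them,
# while humans absorb all the hard miles plus the leftover easy ones.
ap_share = 0.5                      # Autopilot covers half of all miles, all easy
ap_rate = 1.5 * easy_base           # Autopilot worse than humans on easy roads
human_easy = easy_share - ap_share  # easy miles left for humans
human_hard = 1 - easy_share

# Humans look worse than baseline (over-weighted with hard miles)...
human_rate = (human_easy * easy_base + human_hard * hard_base) / (human_easy + human_hard)

# ...and the combined fleet rate exceeds baseline, because Autopilot adds
# extra crashes on miles that would otherwise have been the safest.
combined = ap_share * ap_rate + human_easy * easy_base + human_hard * hard_base
```

Under these assumptions `human_rate` and `combined` both come out above `baseline`, which is the contradiction: the observed fleet rate is reportedly lower than average, not higher.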
Ideally you would be able to compare human drivers of another comparable car brand to the human drivers of Tesla to confirm the Tesla drivers don't seem to be being judged on unreasonably difficult conditions.
There was a source, but I can't find it at the moment. It's fairly simple: human drivers can't disengage, so their driving safety is judged over all the miles they drive plus all the situations where Autopilot disengages and hands control back.
Tesla Autopilot is judged only by the miles driven without disengagement, which is actually quite limited. You can watch YouTube videos to see the kinds of situations where Tesla Autopilot gives up.
There’s no situation where the Autopilot takes over from the human saying “That’s a tricky road, let me handle it”.
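That scoring asymmetry can be sketched roughly like this. The segment records and field names are made up for illustration; this is not Tesla's actual methodology:

```python
# Sketch of the disengagement accounting described above (data is invented).
# Each trip segment records who was in control, whether it followed a
# disengagement, and whether a crash occurred.
segments = [
    {"miles": 10.0, "controller": "autopilot", "disengaged": False, "crash": False},
    {"miles": 0.5,  "controller": "human",     "disengaged": True,  "crash": True},
    {"miles": 20.0, "controller": "human",     "disengaged": False, "crash": False},
]

def rates(segments):
    """Crashes per mile for each ledger under the asymmetric attribution."""
    totals = {"autopilot": [0.0, 0], "human": [0.0, 0]}  # [miles, crashes]
    for s in segments:
        # Autopilot is credited only with miles completed without handing off;
        # the human ledger absorbs normal driving AND every post-disengagement
        # segment, including crashes that happen right after Autopilot bailed.
        who = "autopilot" if s["controller"] == "autopilot" and not s["disengaged"] else "human"
        totals[who][0] += s["miles"]
        totals[who][1] += int(s["crash"])
    return {who: crashes / miles for who, (miles, crashes) in totals.items()}
```

In this toy dataset Autopilot's ledger ends up with zero crashes, while the human ledger eats the crash that occurred half a mile after a disengagement.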
You seem to be missing the point, though. If this effect were significant, then human Tesla drivers should show up as performing much worse than drivers of other cars, because you're claiming they spend a disproportionately large share of their driving time on "tricky roads".
A non-Tesla driver should be doing way better, because they get to pad their score with the easy miles that the Autopilot supposedly takes.
How is it other car companies' fault that Tesla isn't publishing data so we can check whether Autopilot miles are mostly on straight roads and human miles are in tricky situations?
Tesla publishes the human accident rate, and the autopilot accident rate.
If another comparable car company published their human accident rate, and you were willing to assume that the baseline human accident rate should be the same for both brands, you could evaluate your hypothesis that Tesla is shoving the tricky situations onto humans.
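A rough sketch of that cross-brand check, with placeholder numbers (nothing below is a published figure, and the margin threshold is an arbitrary assumption):

```python
# Hypothetical cross-brand sanity check (all rates are invented placeholders).
tesla_human_rate = 2.0   # crashes per million miles, Tesla human drivers (made up)
other_brand_rate = 2.1   # comparable brand, human drivers only (made up)

def hypothesis_supported(tesla_rate, comparable_rate, margin=0.2):
    """'Humans absorb the tricky miles' predicts Tesla's human rate should be
    noticeably worse than a comparable brand's; margin is an arbitrary cutoff."""
    return tesla_rate > comparable_rate * (1 + margin)

hypothesis_supported(tesla_human_rate, other_brand_rate)  # False with these numbers
```

If the two human rates come out roughly equal, the "Tesla shoves tricky situations onto humans" hypothesis loses its main support; if Tesla's human rate were clearly higher, it would be evidence for it.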
As mentioned elsewhere, just because a situation is difficult for FSD to parse and process doesn't inherently make it a dangerous situation for a human driver.