Underlying study is here. [1] Full disclosure, I co-wrote the piece for The Daily Beast [2] that originally suggested that Tesla's methodology was seriously flawed. I am not a Tesla-hater though — I just thought it was odd that the company was playing quite so fast and loose with their safety claims.
- Freeways account for 93% of distance travelled with Autopilot
- Freeways account for 28% of the distance travelled by regular drivers.
- "vehicles on non-freeways crashed 2.01 times more often per mile"
A quick back-of-the-envelope calculation suggests merely driving on the freeway lowers accidents per mile by ~40%, which accounts for most of the difference between Autopilot and regular drivers.
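A minimal sketch of that calculation, assuming the per-mile crash rate is identical for Autopilot and human drivers on each road type and only the mileage mix differs (rates normalized so freeway = 1.0):

```python
# Back-of-the-envelope check: same per-road-type crash rates for both
# groups; only the freeway/non-freeway mileage mix differs.
freeway_rate = 1.0      # crashes per mile on freeways (normalized)
surface_rate = 2.01     # non-freeway rate, per the study's 2.01x figure

autopilot_rate = 0.93 * freeway_rate + 0.07 * surface_rate  # ~1.07
human_rate = 0.28 * freeway_rate + 0.72 * surface_rate      # ~1.73

print(f"apparent reduction: {1 - autopilot_rate / human_rate:.0%}")  # ~38%
```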
Again, the old tricks from "How to Lie with Statistics."
I get you a perfect sample, balanced across age, income, sex, nationality, time of day, and more, and prove to you that 98% of people in this town love romantic comedies.
The only thing I'm not telling you is that I'm running the survey at the exit door of a romantic comedy screening...
Teslas are on average substantially newer than the average US passenger car fleet and thus have significantly more safety features and driver assistance tech, are better maintained, etc.
If Tesla compared only to vehicles of similar age and class (Audi/BMW/Mercedes etc) their stats would be much less favorable.
Recent statistics released by TÜV^1 show that among relatively new cars, Tesla competes with Dacia for the highest failure rate among all cars that go through Germany's mandatory roadworthiness tests.
Go find the HN discussion on this report. German processes and bureaucracy surrounding roadworthiness checks are not favourable towards non-established car brands like Tesla.
In short: big car brands' cars go through a preliminary check at manufacturers' licensed dealerships, and only after that get sent to the government's checks. It enables manufacturers to fix cars before independent verification so their safety numbers look great. Tesla doesn't have this kind of extensive network for "pre-checks".
You're explaining (one of) the reasons Tesla is failing at the metric. Far from showing the metric is flawed, you're exposing the value of a dealer system and what Tesla's customers are losing because there isn't one.
The value is as an additional line of QC. If you think about it as "hiding statistics" then there is no value. If you think of it as the last step in the production process, then clearly the cars making it to customers are of higher average quality if the bad ones get rejected.
It also lets Tesla play with the numbers of shipped cars, because cars returned by dealers don't count as shipped.
But look at the recent HN story about the Tesla without a brake pad.
I think you may be confused as to the context. Or else I am, let me check.
Ok, so this is a check done after the car has been used for 3 years.
So, it's not QC before the car leaves the factory, it's fixing things before the government checks them and puts them into the public stats for test "failures".
I may have misunderstood. These statistics are generated by the government checking vehicles after they've been in service for 3 years by an end consumer? And only after the dealer gives it an immediately preceding check? If so, that invalidates my point.
If it's just the results of regular dealer maintenance, it's fuzzier.
> And only after the dealer gives it an immediately preceding check?
Some dealers are qualified to perform the checks and also offer to perform a pre-check for free. However, cars checked by a dealer would not end up in the statistics of cars checked by TÜV.
If your mom warned you a day in advance that she was going to check how clean your room is, would she get a true understanding of your cleanliness, or just a snapshot of what your room is like?
Also, I think the biggest flaw in Tesla's statistics is that people will turn off Autopilot when they see a dangerous situation ahead - it is pretty much as self-selecting a "good" sample as it gets.
Indeed: either they turn off Autopilot to avoid the crash, or it crashed. The cases where the car would have crashed if the human had not intervened should also count if you want to claim it is super safe.
It does? Those drivers remove a small portion of the Autopilot miles driven, but a large portion of the challenging/dangerous miles. The result is stats with a low accident-per-mile ratio.
It definitely skews the statistics. Compare two drivers' records, but use all the driving of the first, and only the driving on safe/clear/clean/visible sections of highway for the second. Voila: the second will very probably be the "better" driver.
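A quick simulation of that comparison (all the numbers are invented): two statistically identical drivers, but the second driver's record only counts the easy miles.

```python
# Two identical drivers; driver B's "hard" miles are censored from the
# record, the way disengagements censor Autopilot's dangerous miles.
import numpy as np

rng = np.random.default_rng(1)
n_miles = 1_000_000
hard = rng.random(n_miles) < 0.10            # 10% of miles are "hard"
p_crash = np.where(hard, 2e-3, 1e-4)         # hard miles are 20x riskier
crashes = rng.random(n_miles) < p_crash

rate_a = crashes.mean() * 1e3                # driver A: every mile counts
rate_b = crashes[~hard].mean() * 1e3         # driver B: hard miles censored
print(f"A: {rate_a:.2f} vs B: {rate_b:.2f} crashes per thousand miles")
# Statistically identical drivers, but B looks roughly 3x "safer".
```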
People often save up especially for a Tesla (incorporating gas and maintenance savings in the price), including a lot of people who wouldn't be buying a BMW, so I'm not sure price would be the right metric there.
No, I’m saying that people incorporate a $10k price of gasoline into the price of the vehicle instead of paying for them separately as you would with a BMW.
Parent does not include price as a metric, but cars of the same age and class, which probably have many of the same default extra safety features. Older cars might not even have ESC; they certainly don't have cameras that help you detect cars in your blind spots, and so on.
“Class” here refers to pricing, does it not? Luxury car owners will be quick to point out that the Model 3 and Y are not “luxury” class even if the cost ranges aren’t far apart.
From what I heard, the Autopilot code is pretty old by now. With the new development on FSD I think we will see a lot of improvement; in human driving... not so much.
There are additional confounding variables noted in the paper:
- Age of driver
- Specific highways used (congestion, road conditions)
- What is classified as a highway
Overall this is pretty good news for "autopilot" systems. It gives some humans an ego boost as they clutch their steering wheels like John Henry. It gives others, interested in a more relaxed driving experience, the confidence that the computer isn't going to kill them any worse than they already do.
And you know what? All I want out of the first fifteen years of self-driving is a really good highway autopilot.
As in, one I can hopefully sleep at the wheel of on a late-night trip between midwestern US cities.
John Henryism can navigate the suburbs, stroads, and drivethrus. If they achieve a backup safety system that monitors the humans, well, ok great.
But right now, I want long-distance automation, and I can handle the messy last mile. Highways are a much easier place to converge on uniform infrastructure: some sensors, PSA warnings, better signs + markings + construction notices, etc.
I think Uber making as big a splash as it did sidetracked this low-hanging fruit. IMO that was where the bootstrap bucks were, because you would have gotten freight/shipping to pay the cost of the next-gen software, and it would have provided the impetus for the convergent infrastructure.
Trains don't operate door to door. The last mile problem persists unless you travel between railway-connected city centres. Not everyone lives anywhere near a railway station.
(I live in Europe which has relatively good railway network, still I prefer my Tesla as transport method.)
True, they also run such trains here in Finland: an overnight trip from south to north (Lapland). The train tickets for our family of 5 plus a car are so expensive (1200 eur total) compared to the minimal marginal cost of using our EV (150 eur) that we always choose the car.
They were suggesting they want to use the same vehicle door to door with it being automated during the freeway bit. A train would still mean find some transport to the station, wait for the train, ride train, find transport from station.
I'll answer that with another piece of prognostication: good highway self-driving will cause massive disruption to the airline industry.
Would you rather do this for a business trip to a 400-500 mile away city:
- fixed time to depart (and fixed for leaving your destination!)
- have to book ticket weeks in advance for a reasonable flight
- no flexibility if plans change
- pack a small bag
- transit to an airport (parking or cab or whatever)
- arrive a sufficient amount of time in advance
- check in
- go through security
- get on plane, stuck in seat, no/crappy food
- get off plane, go to car rental place
- rent car (not yours, sucks)
- cost: $100-200, and if you want to bring anyone it's a multiple of that.
- no cell phone, restricted to plane wifi if they have it
- door-to-door is what, 4 hours? Man I hope there isn't a delay....
Or, with self-driving:
- pack your car or minivan to the brim with stuff you need
- leave when you want
- sleep / stop to eat / stop to bathroom when you need it
- full cell coverage / hotspot
- take additional people
- EV cost is like... $20-40 for the trip if you include wear and tear?
- door to door is... 5-6 hours
If the car is driving for you, it is far less stressful, more flexible, and much cheaper, especially if you want to take your family.
Trains are a ... bit ... better than airplanes, but still inflexible, surprisingly expensive, won't have your car at the other end, have to haul around your luggage rather than chuck it into YOUR CAR.
When good highway self-driving comes, I will probably never fly anything less than 1000 miles. If I get a self-driving RV that's EV self-recharging at charge stations? Hell I might get rid of my apartment.
Both trains and planes centralise assets, services and pathways, allowing for rent seeking and anti competitive behaviour, government and regulatory capture, and by their very nature are anti consumer and pro big business. Hence their dominance and the increasing pressure on self driving EVs.
You are saying this in a thread about a company that regularly sells features as a subscription and revokes them upon resale, and voids the warranty if you get the car repaired anywhere that's not a Tesla shop. Tesla has entrenched itself as a rent seeker from the beginning.
But cars need infrastructure (roads) which tend to create the same exact thing (big construction corps getting all the cake, thanks to political connections). The scale is for sure smaller but still...
> But right now, I want long distance automation, and I can handle the messy last mile.
Me too, and my Tesla handles daytime good-weather non-construction highway driving pretty well with just the standard autosteer. I finish a long road trip much less fatigued in a Tesla than I have in any other car. (Part of that is the seats. Tesla really sweats the details of making seats comfortable.)
But FSD is kind of a sad joke. Off-highway navigation only works in particular major cities, the car stops at green lights, tries to turn right from the left lane, etc. And Elon thinks that's worth an extra $15k. Nope. It really isn't, at least for me.
Obviously the article goes into a lot more depth, though in my opinion things could be a bit clearer if they first established whether Autopilot performs any differently on freeways or not, and then tried to see if controlling for any of the other possible confounds has any effect. I'm also not overly fond of their reweighting method; if not done carefully, you'll just be amplifying a lot of noise.
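To illustrate the noise worry (with invented rates): reweighting Autopilot's roughly 93/7 freeway/surface mileage mix to the 28/72 human mix puts most of the weight on the stratum with the least data.

```python
# Post-stratification sketch: unbiased on average, but the variance is
# dominated by the tiny surface-street stratum carrying a 0.72 weight.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0                                    # crashes per million miles, both strata
sample_miles = {"freeway": 9.3, "surface": 0.7}    # millions of sampled miles (93/7 split)
target_share = {"freeway": 0.28, "surface": 0.72}  # human mileage mix to reweight toward

estimates = []
for _ in range(10_000):
    est = sum(target_share[s] * rng.poisson(true_rate * m) / m
              for s, m in sample_miles.items())
    estimates.append(est)

# Mean is ~2.0 (unbiased), but the spread is large relative to the rate.
print(f"mean {np.mean(estimates):.2f}, std {np.std(estimates):.2f}")
```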
> An estimated 15% of injury crashes and 24% of property damage-only crashes are never reported to police (M. Davis and Company, Inc., 2015), while 9% of injury crashes and 24% of property damage-only crashes are reported but not logged (Blincoe et al., 2015). Even with robust data, establishing the statistical significance of automated vehicle safety can be expensive. Kalra and Paddock (2016) demonstrated that establishing that an AV has a fatal crash rate equivalent to the national average with 95% confidence would require driving a fleet of 100 vehicles continuously for 12.5 years.
I presume there's an adjustment for this in the figures you compare to, but I am having trouble finding it. That may be my own failure, so I wanted to ask: could you help me find how that's adjusted for?
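As a rough reconstruction of the Kalra and Paddock figure quoted above, under the best-case assumption that zero fatal crashes are observed (the ~1.09 fatalities per 100M miles rate and the 25 mph average fleet speed are my assumptions, not from the quote):

```python
import math

national_rate = 1.09e-8   # assumed: fatalities per mile (~1.09 per 100M miles)
confidence = 0.95

# With zero fatalities in m miles, exp(-rate * m) <= 1 - confidence gives
# the mileage needed to rule out a rate above the national average.
miles_needed = -math.log(1 - confidence) / national_rate   # ~275M miles

fleet_size, avg_mph = 100, 25
years = miles_needed / (fleet_size * avg_mph) / (24 * 365)
print(f"{miles_needed / 1e6:.0f}M miles ~= {years:.1f} years of nonstop driving")
```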
Kalra and Paddock's analysis is correct as far as it goes, but it's easy to misuse to claim that we couldn't feasibly demonstrate an autonomous vehicle is acceptably safe.
It never got accepted for publication because the reviewers didn't think it was significant enough, but I wrote a paper presenting an alternative way to analyse autonomous vehicle reliability data (you'll have to take my word for it, but the reviewers didn't quibble about the results presented, just whether it was important):
The major technical contribution was demonstrating that a class of mathematical models called software reliability growth models are a reasonable way to estimate the "anomalous behaviour" rates of an AV system as bugs are found and fixed.
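For flavor, here is a minimal sketch of one common SRGM, the Goel-Okumoto model, fit by maximum likelihood to invented anomaly data (not the data from my paper):

```python
# Goel-Okumoto NHPP: mean anomalies by time t is m(t) = a*(1 - exp(-b*t)),
# intensity lambda(t) = a*b*exp(-b*t). Fit a, b to observed event times.
import numpy as np
from scipy.optimize import minimize

event_times = np.array([0.5, 1.1, 1.6, 2.8, 4.0, 6.5, 9.7, 14.2, 21.0])  # invented
T = 25.0  # total observation window (e.g. in 10k-mile units)

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # NHPP log-likelihood: sum(log lambda(t_i)) - m(T)
    return -(np.sum(np.log(a * b) - b * event_times) - a * (1 - np.exp(-b * T)))

a_hat, b_hat = minimize(neg_log_lik, x0=[20.0, 0.1], method="Nelder-Mead").x
residual_intensity = a_hat * b_hat * np.exp(-b_hat * T)
print(f"estimated anomalies remaining: {a_hat - len(event_times):.1f}, "
      f"current rate: {residual_intensity:.3f} per unit")
```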
My second point was that using fatal accidents as a way to measure safety is not necessary; you can use more common events such as non-fatal accidents, or in the case of AV systems in development, even the number of occasions the safety driver overrode the automatic control.
I am surprised a scholar would take these reports without a massive grain of salt.
> In this study, all disengagement events listed in the company reports were included in the analysis. It is likely that in some cases, disengagements occurred as a precaution by the safety driver, and no adverse event would have resulted if the AV continued in autonomous mode.
Take a look at [1]. It says that:
> Waymo has developed a robust process to collect, analyze and evaluate disengages for this report. We set disengagement thresholds conservatively for our public road testing. The vast majority of disengagements are not related to safety. Our test drivers routinely transition into and out of autonomous mode many times throughout the day, and the self-driving vehicle’s computer hands over control to the driver in many situations that do not involve a failure of the autonomous technology and do not require an immediate takeover of control by the driver.
> To help evaluate the safety significance of disengagements, Waymo employs a powerful simulator program. In Waymo’s simulation, our team can “replay” each incident and predict the behavior of our self-driving car if the driver had not taken control of it, as well as the behavior and positions of other road users in the vicinity (such as pedestrians, cyclists, and other vehicles). Our engineers use this data to refine and improve the software to ensure the self-driving car performs safely.
So effectively, Waymo has a secret process, which may change at any time, to decide which disengagements to report and which not to report. There is no way to tell whether the car would have slightly bumped into another car, or killed a child, if not disengaged.
Their reports to the DMV do not constitute data, they are PR.
If I had a single-car crash with minor injuries, why would I report it? It seems like there’s limited upside to me as the (obviously at-fault) driver. Go get medical care and go get my car fixed. (I drive cars cheap enough to replace and so do not insure them against collisions that are my fault.)
Drivers driving without a valid license, without insurance, under the influence, wanted by the law, etc would have all kinds of reasons to not report a crash.
Police responding to the actual crash scene: useful. Police report filed after the fact: only if there's some need for said report. And not all injuries are immediately apparent. 3 1/2 years ago I got a first-person view of what it would be like to be on the receiving end of a PIT maneuver, courtesy of a woman who didn't look carefully enough. (Yes, when you looked left the road was empty--I saw that also and turned into that empty spot the block before you!) At the time the only injury I was aware of was a very minor scrape, not even worthy of a bandaid. Later I discovered a pulled muscle--it only hurt when I did certain things.
I also used to know a woman who thought it was nothing more than a fender-bender, went to the doc a couple of days later because her neck was bothering her. Doc sent her straight to the hospital--incomplete cervical fracture, one wrong move and she could have dropped dead.
After college I lived with roommates for a while. One went on a joyride of sorts and damaged a bunch of parking meters, crashed and injured themselves. They had a friend drive them home and repaired the vehicle with no reporting.
In some cities currently even if you want to have police respond to a traffic accident, you'd need to wait a VERY long time - they will tell you that unless you need EMS to try and get a safe spot and call for a tow. Some folks aren't willing to wait hours for police to show up to do what, take a report?
Depending on the state, though, reporting all injury crashes and any property damage over a certain (fairly low) amount is required by law. Not necessarily to the police, but to the DMV. I assume that statistically that would count in the 'reported to police' column.
I got hit a few weeks ago while walking my dog (dog was fine). I thought I had just bruised my tailbone so I waved the driver on after a short chat and never reported it. In hindsight I wish I had as it seems like my tailbone was fractured and not just bruised but it definitely happens.
So it sounds like it's Simpson's paradox [1] at work. I had always wondered why many people are slow to accept self-driving cars despite the claims of them being statistically safer. I think that explains it --- to make consumers confident about the self-driving technologies, it is not enough for them to be able to handle the "easy" kind of driving (e.g. on highway and under relatively good visibility conditions) better than humans; they need to demonstrate that they can drive better than humans in the most challenging driving environments too.
I guess this is the classic scenario where human intuition is defeated by carefully presented statistics.
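To make the effect concrete, here is a toy version of a full Simpson reversal (rates invented; the 93%/28% freeway shares are from the thread): the automated system is 10% worse than humans on both road types, yet looks about 30% safer in aggregate.

```python
# Simpson's paradox in miniature: worse in every stratum, better overall.
rates = {                      # crashes per million miles (invented)
    "freeway": {"auto": 1.1, "human": 1.0},
    "surface": {"auto": 2.2, "human": 2.0},
}
mix = {                        # share of miles driven on each road type
    "auto": {"freeway": 0.93, "surface": 0.07},
    "human": {"freeway": 0.28, "surface": 0.72},
}

for who in ("auto", "human"):
    blended = sum(mix[who][road] * rates[road][who] for road in rates)
    print(who, round(blended, 3))   # auto 1.177, human 1.72
```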
The thing is, you don't even need Tesla "self driving" for that.
Adaptive cruise control - tech we've had for what, 20+ years? - is more than enough for most highways; it takes out ~80% of the stress/boredom of driving, especially since lane changes are minimal and curves are gentle.
The hard parts of driving (aka not highways) are the real problem, as expected. We solved the easy parts in 2000 and who knows if we'll solve the hard parts by 2050.
I have a car with adaptive cruise control and lane assist, and most of my morning commute (on days when I go into the office) is driven on the highway. Most of the time I don't turn it on because I'm in a self-perceived hurry to get to work, so I forget.
You know what would be really nice? Something that detects when it's safe and reasonable to suggest turning on adaptive cruise control so I use it more. Maybe something like detecting I've driven a few miles at a consistent rate of speed above a certain threshold.
The hard parts are definitely the problem, but there are lots of easy wins left on the table too.
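A rough sketch of that heuristic (all thresholds invented): watch a rolling window of speed samples and suggest ACC after a stretch of steady, highway-speed driving.

```python
from collections import deque

class CruiseSuggester:
    """Suggest enabling adaptive cruise control after sustained,
    steady, highway-speed driving. Assumes one speed sample per second."""

    def __init__(self, window_s=180, min_mph=55, max_spread_mph=5):
        self.speeds = deque(maxlen=window_s)
        self.min_mph = min_mph
        self.max_spread = max_spread_mph

    def update(self, speed_mph):
        self.speeds.append(speed_mph)
        if len(self.speeds) < self.speeds.maxlen:
            return False   # not enough steady history yet
        lo, hi = min(self.speeds), max(self.speeds)
        return lo >= self.min_mph and hi - lo <= self.max_spread
```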
I would personally hate for that to be on but I can see your point. Maybe a simple hardware switch for on / off and adaptive cruise control turns on once you are over 65mph for more than 1 minute and will only adapt +-15mph total?
GP doesn't suggest having the car turn on cruise control on its own (nightmare!), but instead to remind the driver that this may be a good opportunity to switch it on themselves.
My 2015 Tesla S 70D (AP1.5) does just this for Autosteer by adding an icon in the instrument cluster to say that the car thinks it could do it. I wish it had an optional chime for it though as I really don't want to spend much time looking at the instrument cluster.
I am thankful to turn Autopilot on as soon as I reach a highway entry ramp. Even if I'm in no rush and fully capable, I want the assistance merging onto the highway. I don't want to constantly look over my shoulder for a window to get into the fast lane; I want one selected when it becomes available.
The hard parts are the whole point of autonomy: the parts so hard that humans still fuck up and kill people. That's the ideal of autonomy IMO, and short of that it's missing the point.
In the interest of openness and accountability you need a transaction to turn the engine on, or to activate the brakes. They say that it takes a few milliseconds but never really tested it so who knows. Gives a new meaning to “gas fees”.
I think traditional auto makers have done a much better job of communicating to consumers what the tech can actually do. We don’t have self driving cars (which implies that they can drive in those harder conditions). We have cars that are good at lane keeping on freeways.
I wish Tesla would celebrate that victory rather than double down on lies.
> it is not enough for them to be able to handle the "easy" kind of driving (e.g. on highway and under relatively good visibility conditions) better than humans
Is there any evidence that they actually do the easy stuff better than humans? I would want to see it compared with cars of similar age and cost driving on similar roads with similar aged people in control of the vehicle. I'm pretty skeptical that such a comparison would in fact be favourable to Tesla.
Robots are better at reaction times, spatial awareness, and impulse control. There are obvious advantages capable of making up for the shortcomings of the NN implementation. Essentially, a lot of hard decision-making is avoidable with proper awareness and caution.
I don't know what else to advise people who want evidence. The cars are real things you can experience...
People are slow to "accept" self-driving cars because they don't exist in a form that anyone can actually buy. Regardless of safety or lack thereof, I can't purchase a real SAE level 4+ vehicle today at any price.
"Simpson's paradox, which also goes by several other names, is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. "
In this case, the effect when the data is separated out is that AP is only marginally safer. When they are combined, it appears to be significantly safer.
I'd say this isn't a case of Simpson's paradox at all.
The groups aren’t segmented by freeway vs. non-freeway use, but it’s an important omitted lurking variable. After conditioning on the variable, the takeaway changes.
Tesla rolls out cars in generations. So just take the $40k Model 3/Y platform introduced in 2017 and compare it with any car introduced that year.
My friend bought a BMW in 2019 for similar money; it doesn't have half of that. Maybe you can have those features, but you have to buy extra packages for extra money.
Tesla doesn't sell safety features at an extra price. The only paid extra is Autopilot; the hardware - cameras, lane departure avoidance, blind spot collision warning, etc. - is in every car.
Take even such a simple feature as the forward collision warning system. It is super accurate while you drive, because it uses the same data and models as Autopilot to predict a crash. If you aren't slowing down fast enough, or the car in front or behind changes its dynamics, it will warn you immediately. This feature alone saved me from a collision several times. I'm not a good driver.
As I said, in 2022 there are many car brands that are catching up, and in some cases doing better. And this is good.
Humans are bad, constantly distracted drivers. Technology is the only way to make our lives safer.
Automatic emergency braking is a required standard feature in the US.
Forward collision warning became standard in 2016 by at least ten manufacturers: Audi, BMW, Ford, General Motors, Mazda, Mercedes-Benz, Tesla, Toyota, Volkswagen, and Volvo.
Without speaking to Blind Spot, my Audi does similar intelligent things. If you change lanes next to a vehicle or just in front of it, but you have a positive speed differential, it won't activate. If there is a vehicle in your blind spot approaching you but their speed differential to you is low, it will activate later. If there's a much higher speed differential (say they're in a HOV lane, and you're in a much slower regular lane), it will activate much earlier.
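A sketch of that speed-differential logic (the thresholds are invented, not Audi's actual calibration): alert earlier the faster the other car is closing.

```python
def blind_spot_alert(gap_m, closing_speed_ms):
    """gap_m: distance to the car in the adjacent lane (m);
    closing_speed_ms: positive if that car is gaining on you (m/s)."""
    if closing_speed_ms <= 0:
        return False                 # positive differential for you: no alert
    time_to_overlap_s = gap_m / closing_speed_ms
    return time_to_overlap_s < 3.5   # alert threshold in seconds (invented)

print(blind_spot_alert(20, 2))       # slow closer, 10 s away: False
print(blind_spot_alert(20, 10))      # fast closer (e.g. HOV lane): True
```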
It's not about how you calculate it. It's about how well you can understand what's around you, and Audi is not super good at that. Tesla is far better at understanding the context. This year they will switch to the FSD stack.
The car will not drive itself. But safety features with the driver engaged will get a huge boost. And that will be available for any car released from 2015-17, with some minor exceptions. That's how safety should be handled.
And yes, for those minor exceptions Tesla offers a retrofit.
Based on what? "Understanding what is around", and "Tesla is far better at understanding the context" - my Audi has never attempted to drive into stopped traffic.
Your vision (and not just of Audi) that all other manufacturers have these dumb, simple systems with no context of what's going on around them is simply, and factually, wrong.
My Audi (and I'm not saying they are the leader, I'm just speaking on them because of my experience) has no fewer than 13 radar sensors around the vehicle and will tell me of incursions into the vehicle's "space". It will recognize that some incursions are more "concerning" than others and alert appropriately.
I really have no idea what you mean by "Audi is not super good" about safety systems.
I drive a 2021 Audi A1, the most basic version. From Tesla's safety page, this Audi has:
- Impact protection
- Structural integrity
- Crumple zones
- Lane departure warning/prevention
- Collision detection
- Emergency braking
The only thing it doesn't have is the blind spot sensor. But, again, it's on an entry-level car that costs easily 10 thousand dollars less than the cheapest Tesla.
Ah yeah, and all of that works in Swedish conditions, where a Tesla would be blind for a significant amount of time (the sun hanging just above the horizon for prolonged stretches, and reflecting off a snowy/wet road).
But sure. As maxdo would have us believe, "it's hard to match what Tesla offers".
You still haven't provided any evidence that suggests (1) Teslas have more impactful safety features than other cars in the same class or (2) that Teslas are safer in general.
Also, if you're saying FSD is finally, finally going to be properly released this year: I doubt it.
Your strong pro-Tesla bias and your sweeping, unfocused and unsubstantiated allegations against other car makers undermine your argument completely.
It is standard on the Cherokee and Compass. On the Wrangler it is part of the Advanced Safety Group with adaptive cruise control. I think it might have to do with false positives when off-road.
Having driven Audis for years, including an A4, S4, and a recent-model A4 (and other models as rentals in Germany), I can tell you definitively that Audi _pales_ in comparison to my Tesla Model Y in terms of safety. It's not even remotely comparable. Until you've driven one on a regular basis, it's hard to explain. They are light years ahead of traditional automakers, to the point of embarrassment.
Suffice it to say, I don't foresee myself ever buying or leasing a BMW or Audi again. Tesla is so far ahead that it probably won't compute unless you've driven one for a while.
>> My friend bought a BMW in 2019 for similar money; it doesn't have half of that.
That's an incredibly bad example, because with German brands you have to pay extra for basic safety features (famously, in the new Polo you have to pay extra to get more than the basic 3 airbags; I don't even know how that's allowed). Look at a brand new Peugeot/Citroen/Kia/Hyundai and you will pay less than what a Model 3 costs and get just as many safety features. Yes, all the stuff that is optional in a BMW - but look outside of Audi/Merc/BMW/Volkswagen and you will find that this stuff is completely standard.
You just excluded 60% of the luxury market. Peugeot/Citroen barely exists in the US.
Kia and Hyundai will be the biggest competitors among the legacy auto brands, even though their tech is not ambitious: basically copy-paste from the best and make it a bit cheaper, but not very cheap :)
People shopping in the luxury market can afford the options they want. Did you know that any paint other than basic black and basic white costs extra on a BMW? If your argument that luxury cars lack safety features because they cost extra money were valid, then arguing that only black and white BMWs exist would be equally sensible.
> My friend bought a BMW in 2019 for similar money; it doesn't have half of that. Maybe you can have those features, but you have to buy extra packages for extra money.
#1 lesson of buying cars is:
1. Don't buy German cars if you don't have to (or if you buy them, never complain about the price of their optional equipment).
Usually there is this thing called a transmission in a car that makes it able to accelerate enough for most situations. The only problem with having less horsepower than other cars is that the maximum achievable speed is lower. But for sure, riding above 100mph is much more comfortable in other brands.
Low horsepower vehicles feel downright dangerous. Especially in the third world where executing moves quickly is very important. I suppose there is a way of driving that takes this into account but it feels so unnatural to me.
>Facts: Tesla was the first to introduce such wide range of safety features in every car they sold. They don't make safety a premium feature. They invest in own crash test facilities. Only Volvo is kind of on par.
Volvo built their crash test facilities before Tesla came into existence.
> Tesla was first to introduce over the air updates of software and firmware.
I want the controls on my car to constantly change out from under me as much as I want my miter saw and power drill to startle me with a new interface. Screwing with safety-critical controls does NOT make people safer.
It wasn't much of an accomplishment for Tesla to introduce safety features across its lineup, when its lineup consisted of luxury cars.
Today, other car manufacturers are standardizing safety features across their model lineups while manufacturing cars that are a third the price of the cheapest Tesla.
BMW is a luxury car, and you still have to pay extra for safety. That's the difference: one car brand makes it standard, the other asks extra dollars for every safety camera installed.
The disgusting fumes are almost exclusively from diesel trucks and very old cars. Manufacturing affordable (no, the Model 3 is not affordable) new cars helps take old cars off the streets and make our neighborhoods cleaner.
You are never going to convince me that a car emitting gray or black smoke out the back is somehow not any worse than one that has no emissions at all.
> Autopilot is a tool. You can misuse it, sure. But if you care about safety it's hard to match with what they offer.
1. It's not autopilot, by any measurable criteria
2. As the adjusted data clearly shows, it's not that hard to match it. And since you mention Volvo, they definitely match it. They are honest enough, though, not to call it "autopilot" or "full self driving", or to pin the blame for its failures on the driver.
Yup. Autopilot on aircraft is very different because pilots don't have to be ready to take over on a moment's notice.
Of course they have to be monitoring the flight but they can be generally assured that the autopilot won't decide to just fly into a mountain with only fractions of a second for the pilots to intervene.
There are moments a pilot has seconds to react and disengage autopilot.
Example: Traffic Resolution advisories require a 5 second reaction time and 2.5 seconds for any follow-on reactions. This reaction time includes the time to disengage autopilot and also put the aircraft into a climb-or-dive. Studies show that a situationally aware pilot takes at least 2 seconds to start the reaction, and another second to achieve the proper climb/descent. There are other times on e.g. approach, or ground collision warnings with even narrower margins for disengaging autopilot.
Pilots generally know when they need to be hyper-aware with autopilot and so do competent Tesla drivers. If there are issues with the name "autopilot" for Tesla, the same argument needs to be made for aircraft manufacturers.
"Example: Traffic Resolution advisories require a 5 second reaction time"
When the Autopilot swerves into a concrete wall for no reason, you will have 0.5 seconds left to live. Reacting in the last 0.1 seconds before collision will not save you - you must react within about 200ms.
It is physically impossible to react in time to some autopilot mistakes, stop defending this idiocy.
Surely you can see how you might have less than a second to react if Autopilot swerves into oncoming traffic? I can't search all Tesla accidents by 'least time to react'.
> If there are issues with the name "autopilot" for Tesla, the same argument needs to be made for aircraft manufacturers.
Pilots require extensive training (and re-training). They must meet certain criteria to even fly certain types of planes. And they have to go through additional training if they switch planes (going from Airbus to Boeing, for example).
So, if we're making "the same argument" for Tesla, then Tesla needs all of that before it can call its system an autopilot.
5 seconds (or the 2 seconds that pilots require) is a lot more time than you're afforded to take action while driving a Tesla on Autopilot. And when driving the Tesla you often won't get a clear warning signaling that you have to take over and do something - it can fail silently and swerve straight into an obstacle.
The difference is that you can't be a pilot with even as minor a physical impairment as a crooked nose, and it takes years of extensive training to get a license, while any rich 18-year-old can sleep behind a Tesla's wheel…
A sailboat autopilot is even simpler. The simplest ones maintain compass direction, but don’t compensate for drift due to water currents or side winds, or for the difference between magnetic and true north.
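To make "simple" concrete, here is a toy heading-hold loop of the kind such autopilots run (gains invented): steer proportionally to the compass error, with no notion of current, leeway, or magnetic variation.

```python
def heading_error(target_deg, compass_deg):
    # Smallest signed angle from current heading to target, in degrees.
    return (target_deg - compass_deg + 180) % 360 - 180

def rudder_command(target_deg, compass_deg, gain=0.8, max_deflection=30):
    # Proportional control, clamped to the rudder's physical travel.
    cmd = gain * heading_error(target_deg, compass_deg)
    return max(-max_deflection, min(max_deflection, cmd))

print(rudder_command(90, 75))   # 12.0 degrees of rudder toward 090
```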
An IMO important difference with pilots is that pilots are professionals that get trained and periodically retrained on how specific planes and their autopilots work.
Tesla drivers, on the other hand, hear Tesla’s at times optimistic marketing, and may just learn how to switch autopilot on and off without understanding when to use it and when not to use it.
And that is why the whole 737 Max fleet got grounded and none of those planes were allowed to be operational until Boeing put in place a fix for the problem.
And most airlines ended up cancelling pending orders for the 737 Max and switched to a different brand that didn't play nearly as fast and loose with safety.
It does seem quite comparable in capabilities to the actual autopilot in aircraft. And that’s what’s confusing to people; most don’t realize that an autopilot is a control used by a driver or pilot to reduce their workload. (As a pilot, I am still logging flight time as the “sole manipulator of controls” during the time the autopilot is engaged.)
And that "confusion" was created intentionally by Tesla. So that it is easy to pretend it was not deliberately pretending higher capabilities while giving yourself option to pretend you are not.
You know like that claim about human being there just because of pesky regulations and laws. Not because human would be needed.
>It does seem quite comparable in capabilities to the actual autopilot in aircraft.
including the system's assumption that the operator of the vehicle is doing so as a trained professional in that skill, including the extensive continuous training requirements required to maintain professional-level expertise?
The author is projecting his assumptions based on 400 people questioned. Is that fair?
Most cars Volvo produces are not as safe, simply because Volvo is still making most of its money on gasoline cars, and those are less safe than electric ones. Do we need to create a screaming article, "Volvo is deliberately making money on old tech that kills people"? Again, Volvo is a premium brand, so they don't want to invest more in EVs to make your car safer. They also promote hybrid cars, which have the highest chance of catching fire compared to EVs and regular ICE cars.
But yeah, naming of an optional feature is so much more important compared to real tech, crash tests etc. That's the logic the media is trying to put in our heads. Don't think.
> Most cars Volvo produces are not as safe, simply because Volvo is still making most of its money on gasoline cars
That's... quite a leap.
> Do we need to create a screaming article, "Volvo is deliberately making money on old tech that kills people"?
That would be just that: a screaming article.
> so they don't want to invest more in EVs to make your car safer.
Ah, yes. Because the only way to make a car safe is to make it electric.
1. I was responding to the ridiculous claim that "if you care about safety it's hard to match with what Tesla offers".
This claim is bullshit, and Volvo easily matches what Tesla offers when it comes to car safety.
2. Volvo XC40 Recharge, C40 Recharge are the cars that exist, but you chose to ignore them
> Naming of an optional feature is so much more important compared to real tech, crash tests etc.
I tried to parse this sentence, and I couldn't. Are you saying that Volvo doesn't do real tech, or doesn't do crash tests etc.? If so, are you delusional?
Volvo's plans are all in the future; it's just a commitment. Today the page says 2025; tomorrow a web designer puts up another beautiful article saying 2030.
A gasoline car is less safe than the same car with the same set of airbags simply because of physics: an EV has a lower center of gravity, no engine in front to kill you in a forward collision, and a battery at the bottom that serves as a protective layer in other types of crashes.
In fact Tesla is about to release cars with structural batteries, which use the battery as part of the car's body, making it even safer. The Model Y with the structural pack will be out in March 2022.
Volvo at this moment is promoting hybrids and ICE cars that don't have these features, because it's bloody business. And money is money.
Just compare the real actions of Volvo, which claims to be one of the safest brands, with Tesla's.
Did you even read the article I linked? I have friends that have the XC40 Recharge...
> In fact Tesla is about to release cars with structural batteries, which use the battery as part of the car's body, making it even safer. The Model Y with the structural pack will be out in March 2022.
I'll believe that when I see it. How's the Tesla Semi release going? Especially since structural batteries are still an active research topic that actual researchers think will be in mass production in 5 years or more.
> a battery at the bottom that serves as a protective layer in other types of crashes
And it can nicely burst into flames that burn for hours, without any easy way to put them out. I sure as hell would rather have a broken arm than be burnt alive in a steel frame.
> But because other brands release millions of boring cars with 0 innovation they are not criticized.
That's not actually why.
It's because carmakers are huge advertisers. You get on their bad side, they pull their advertising.
Tesla's advertising consists of Elon Musk's Twitter account, so Tesla gets the reporting the others would get if the authors weren't reliant on the subjects of those stories for their paychecks.
It covers 5.4 million trips in 3,400 vehicles with digitally collected records and vehicle data and has been independently validated by the Virginia Tech Transportation Institute and Department of Transportation.
The 93% highway driving figure, which is used to normalize Tesla's numbers and then compare against the NHS dataset, comes from that 28-car Tesla study from 2018 - or did I read that wrong?
Even after adjusting, it does look like autopilot is slightly safer (although not a strong enough effect to say for sure). It doesn't appear to be worse, so good enough to allow it, IMHO.
I actually wonder if there's another effect: drivers are only eligible for beta FSD if they meet a super high safety score (which isn't a perfect metric, but probably better than nothing). That incentivizes good driving even when FSD is not engaged. Of course, that's a recent thing and hasn't shown up in the data.
Every car company does marketing. Few make over-the-top safety claims about their products. This is the company that decided to have its cars not stop at stop signs. Normal rules don't apply to Tesla.
"In recall documents, the electric vehicle maker notes that if certain conditions are met, the "rolling stop" feature is designed to allow the vehicle to travel through all-way-stop intersections at up to 5.6mph without coming to a complete stop."
I mean... I am a "Tesla Hater" in that I don't like the direction they're proceeding in; but every human being I have ever met in real actual life does rolling stops under some conditions. The last time I could confidently state "I never rolled through a stop at very slow speed" was when I was 17, and only NOW do I realize how much of an annoying bum I was :P. So if a car does a rolling stop through a stop sign when there are no other cars, obstacles, or people, my thought is 1. yes, that's a breach of the law as written 2. it's what everybody else safely does.
Kind of like in many jurisdictions, police will gently talk to you if you drive 60kph on the freeway, or even if you're doing 90kph in the left lane. Yes it's legal, no it isn't safe.
It is one thing to note that humans regularly break the law. It is another thing to task people with programming a robot to break that law. I'm very sure that many people break the speed limit regularly. But I would never expect to get away with programming a car to break that limit. No sane person would ever tell an employee or customer that rolling stops are OK, not in writing.
And 5.6mph isn't rolling. That's beyond a walking pace. That's jogging territory. Any cop watching an intersection would ticket this.
I fully support the noble notions and idealistic ideas.
But let's make it real, and let's ask a question about the real world:
Does that mean that expectation is all automated cars will go exactly the speed limit on the left lane of North American superhighways?
If so, those cars will inherently be a danger to life and limb. I don't care about any self-righteous driver who haughtily declares "I drive the speed limit in the left lane of an American superhighway"; I've spoken to advanced driver safety instructors, highway police officers, and city councilors who all agree those people need to bloody well move it along - it's just not safe.
> Does that mean that expectation is all automated cars will go exactly the speed limit on the left lane of North American superhighways?
It's already against road rules to stay in the passing lane when cars behind wish to pass, regardless of the speed limit, or whether you're already driving at it. If cars behind desire to overtake, give way. This is already codified. No need to blame the cars or the self-driving tech. It's the human driver who bears responsibility for what the car does or doesn't do, as they are the only ones able to countermand the autopilot. Blaming Tesla for any of that seems like dogpiling and beside the point.
That depends on the jurisdiction. In my state (MA), I can use the left lane while passing other traffic even if there is traffic behind me who wishes to go faster than I am.
I don't think that is contrary to my point or what I was trying to say. The left lane is for passing cars in the right lane, and for turning left across traffic where there is no middle turning lane. The left lane is not meant for cruising. I don't think this varies by jurisdiction, and it may not be a regulation per se; it's just the norms and rules of the road.
Yes. For a company to program illegality into a product opens them up to untold liability. Normal car companies program their speedometers to read slightly high (5% ish). This keeps them from being dragged into lawsuits by people who claim that an inaccurate speedometer contributed to the severity of a crash. Any car company that programs a car to drive faster than the limit will either be liable or have to pay lawyers in every crash involving such cars. Even if their car doesn't cause the crash, the fact that it was speeding will contribute to the severity of the accident. It would be like programming a robotic bartender to serve kids who are underage but look old enough to pass for 21. Corporate lunacy.
>Yes. For a company to program illegality into a product opens them up to untold liability.
People have been saying this for years yet Tesla is not being sued into nonexistence. Presumably their contracts are written well enough that if the user tells the car to go above the speed limit that's on them.
Because incidents involving these programs are very small in number. At the moment there are very few Teslas on the road in comparison to other car companies, so statistically there would be at most a handful of such accidents. But give it a few years until there are 10x or 20x as many Teslas on the road. Then the class actions will start. That is, if, like here, Tesla hasn't already recalled all such vehicles.
There's a difference between the driver setting the cruise control above the speed limit, and a computer unilaterally deciding that it should run a stop sign.
Most cars creep at 5 to 6 mph when idling in gear, so any roll through a stop sign is likely to break the 5 mph mark.
> Any cop watching an intersection would ticket this.
Not sure where you live, but this is absolutely not the case where I am. I have only ever once heard of a traffic officer ticketing for this. On the other hand, I see cops watch on as people ignore "no turn on red" signs and let them get away with it pretty much every day. Let alone rolling through an intersection at 5 mph.
Lol, I live in SF. I've seen two cruisers waiting at a red light watch a car go right through it and do nothing. I'm not complaining or anything, but that guy is defo wrong that cops will ticket anyone.
On a public road I almost always come to a full stop, and if I don't I recognize it as an error. That's the law and there are very good reasons for it. It's an unambiguous standard of performance, for example. Arguments for rolling stops based on personal utility are selfish, IMHO, and arguments pleading utility to others are disingenuous--the rolling stoppers say that it's safer to rolling stop because of the rolling stoppers? Please. Think it through. :-)
(Part of the reason I do it is because as I age I would like to ingrain habits that will make me a safer driver even as my cognitive ability declines.)
Near me there is an intersection where the same cars drive through on a daily basis and where the drivers have habituated themselves to rolling stops. Yes, it's almost always fine. But I have been almost T-boned twice, and was hit once, fortunately with minimal damage. And even though they do not have the right of way, their habit of rolling stops regularly pre-empts the actual rules of the road, and they cut off drivers who have the right of way.
That this is due to the normalization of deviance is abundantly obvious.
One thing that pops up into mind - I wonder if there's a difference between drivers of automatic vs manual gear shifts in propensity for rolling stops.
I'm struggling to respond constructively to your comment. Think about all the phenomena that make up "cognitive ability" and all of the possible dimensions and properties of "cognitive decline".
It should be obvious that your advice is not in any way useful or actionable. And rituals are a very well accepted strategy for dealing with situations which demand consistent and good attention, where human cognitive variability causes problems.
A couple of famous examples--checklists used by aircraft pilots, or the pointing rituals used on Japanese railways.
When I was a kid and we were living in Vancouver, my mum told me how she did a 'rolling stop' through a stop sign at 4am with empty streets, and a cop car noticed and gave her a massive fine. She was in tears because my family wasn't earning much money at the time.
For some reason I still remember this story decades later and always come to a full stop at stop signs, that's how I was taught in driving school and that's how I will always do it. I'm a calm driver and I don't care if people are annoyed by me, I will drive the speed limit and I will not rush anywhere, it's not my problem that someone else needs to speed because they woke up late, safety should be the number one priority when you're operating a vehicle.
A) Cops don't work such shifts for the weeks/months it takes to get a court date.
B) Cops don't normally "show up" in traffic courts. They can often appear via electronic means.
C) Cops have great respect for courts. If called, they show up.
D) If your only defense is the faint hope that the cop doesn't show, the judge is going to rain hellfire on you for wasting everyone's time. Court costs. Amended tickets. Contempt.
Unless you are about to lose your license, you don't want to roll the dice. If you go to court to fight a traffic ticket you'd better have an actual defense.
I’ve shown up to contest all but one traffic ticket I’ve ever received*. It’s up to the state to prove what I’m being accused of; I am not required to admit to it or passively pay the fine without appearing if I choose.
I’ve won many (outright or gotten reductions to non-moving violations) and lost some, but I’ve never had a judge “rain hellfire on [me]” or anything even remotely similar to that, even in cases where I pled “not guilty”, listened to the state’s case, and presented no argument in my defense.
* I got a camera ticket in Switzerland on a business trip that I did just pay rather than traveling back to appear and undoubtedly lose.
In this case you should go and ask the judge for a smaller ticket regardless. Getting a ticket so large that we are talking about it on hn 40 years later? Go to court.
Those are traffic cops working daylight hours who pull over many cars and it makes sense to stack those cases. We have a 4am case with a very high bill. If the officer keeps that shift chances are they are not going to want to show up 9am one day before going back to nights the next.
It may not have been. Can the case be made, is there proof, will anyone show up? Contesting a fine takes time but the judge can reduce the amount if it was unusually high.
enlyth's mother did say she rolled the stop sign, so unless she's lying, it's clearly a justified fine.
I wouldn't hesitate to break traffic rules at 4 o'clock in the morning if there was zero risk of harm - especially while bicycling I'm very flexible, but I also wouldn't complain if I got fined breaking the law. The law must be upheld even at night.
I wonder if that might be a function of how roads near you are designed. I live in the Bay Area and there are definitely certain roads and intersections where I don't feel safe not coming to a full stop and looking both ways. It might be because of hills, or because the other direction has no stop sign, etc. Maybe where you live roads have better visibility, but at least in my experience in my part of the Bay Area, I often find myself needing to be very careful at some stop signs and have consequently been doing complete stops.
Some people have never seen sidewalks, tall hedges, or kids riding bikes on said sidewalks blocked by said tall hedges, because they don't exist in modern suburbia, particularly in certain climates.
IDK about the US, but in France the speed limit is a target speed; if conditions are good (weather, visibility, traffic) you're supposed to drive at or near maximum speed. It's illegal to drive too slow, and there's a specific fine for this.
The law that you linked says that drivers aren't allowed to drive at an unreasonably slow speed, and defines that to be (for highways, with good weather, and just on the leftmost lane) 80 km/h.
It doesn't say that the speed limit is a target, or that you're supposed to drive at or near it.
Yes, but it's how we're taught at driving school. I think it's written down somewhere but can't find it at the moment, so the article I linked to is the closest I could find.
It doesn't matter. People have all kinds of things they are taught about driving from their parents, instructors, and even official documents that don't carry legal weight (e.g., a highway patrol website).
Even when these types of ideas are good ideas, they aren't binding and you can't count on others to follow those rules. The only true rules of the road are the subset of the laws that are enforced; where enforcement might be done by law enforcement officers, civil judges, or insurance company adjudication processes.
In the US it's legally a max. I have seen, although rarely, entire lines of cars pulled over and ticketed for speeding on an interstate (limited access, multilane divided highway, max speed 65-70 mph (104-112 kph)). Driving too fast for conditions is a different fine, so if it's foggy and you're going the posted limit, you could also get fined.
You can get ticketed for driving too slow, but I've never seen it. I've only seen a minimum speed posted on an interstate (45 mph (72 kph)), but conceivably you can get ticketed anywhere for impeding the flow of traffic.
> You can get ticketed for driving too slow, but I’ve never seen it.
I have (very gently) forced a car off the road once in Germany while on the phone with the police (112): a very elderly gentleman was doing 30 kph on the autobahn and causing one near miss after another. The police came and helped him get home. We talked for a while, and it turned out it was his first trip in a long time, to go and see his sister in another town; he'd gotten lost and was frightened out of his wits by all the traffic zooming by.
I don't know how it ended, he probably kept his license because clearly there was no officer around to witness the event but I'm pretty sure he avoided the autobahn after that.
This is incorrect (which is lucky, because it would both be extremely unreasonable and impossible to enforce). You can only be fined if you are driving so slowly that you are causing an obstruction.
Australian road rules: http://www5.austlii.edu.au/au/legis/sa/consol_reg/arr210/s12... -- key paragraph: "a driver does not unreasonably obstruct the path of another driver or a pedestrian only because [...]
(b) the driver is driving more slowly than other vehicles (unless the driver is driving abnormally slowly in the circumstances)." (Note that these rules form guidance for the States, so there will be per-state differences -- see below)
> This is the company that decided to have its cars not stop at stop signs. Normal rules don't apply to Tesla.
It's a driving profile which must be turned on that implements behavior extremely common among the driving public. My Honda with Traffic Sign Recognition and adaptive cruise control lets me set the speed above the speed limit, too.
Our traffic laws are in many ways ambiguous, such that even the police officers enforcing the laws will readily admit you're allowed to drive above the posted speed limit. When traffic rules are interpreted ambiguously, it's not that strange for a driving profile to do the same.
I think there are two arguments here: legality, and actual safety. I would argue that the slow rolling stop is perfectly safe in all cases, but the law exists as it is only to reduce ambiguity for enforcement. It's not a calculated risk - there is no circumstance where it's unsafe.
In the UK stop signs are rarely used, they are in places where visibility is so poor approaching the junction that it wouldn't be safe not to stop.
There are definitely places where it isn't safe to roll through a stop sign, unfortunately in the US the system uses stop signs as a kind of traffic calming measure so then there isn't a clear way to mark the places where stopping is absolutely needed for safety. Of course in theory drivers should be able to notice the lack of visibility and drive accordingly but in practice the hazards can be somewhat subtle and many drivers have poor judgement.
It is definitely unsafe to enter an intersection where visibility is compromised by an obstruction, hill, or curve. Rolling such stops is dumb even when you don't see anything to stop for.
It could also be that being slightly controversial and possibly factually incorrect is by design in the "any press is good press"/"it's better to ask forgiveness than permission" kind of way. When you are a relative newcomer going against legacy incumbents with much larger ad budgets and fighting to live another day such guerrilla marketing style tactics can be advantageous.
The trick is to realize when you've made the jump and are now the established brand, and to wean yourself off the "fast and loose" approach.
I have felt since day 1 that "autopilot is better than the average driver!" is the most misleading fact/statistic in the industry.
The "average driver" involved in an accident includes drunk drivers, road ragers, habitual speeders, teenagers, people driving in bad conditions (rain, snow), people driving without sleep and several other risky groups that most people buying Teslas aren't part of. To be worth it, the car has to reduce my personal accident risk under the conditions I usually drive, otherwise I prefer to keep my own hands behind the wheel over handing over control to an algorithm.
I don't disagree with your broader statement about autopilot's safety relative to other drivers, but...
>The "average driver" involved in an accident includes drunk drivers, road ragers, habitual speeders, teenagers, people driving in bad conditions (rain, snow), people driving without sleep and several other risky groups that most people buying Teslas aren't part of.
... what? Tesla owners absolutely can be habitual speeders, people who drive without sleep, road ragers, and drunk drivers. Can you please explain your thought process behind this claim? I really don't understand why anyone would assume that Tesla owners are less likely to have those traits than non-Tesla owners.
Hmm.. The underlying studies (Baum, 2000; Eensoo et al., 2005) that found links between socioeconomic status and DUI/DWI did so by comparing those arrested against a control group – which I believe should deal with that concern?
In my personal experience, the "drunk drivers", "road ragers", and "habitual speeders" groups seem to be over-represented in expensive cars rather than cheap ones. Poor people cannot afford to crash their car, while rich people can.
Data released by the DOL and insurance companies does not match your personal experience. There's a reason 18-year-olds driving shitty cars pay through the nose for liability coverage.
I read it that way at first too but I think the reasoning is actually correct. The population of Tesla drivers will of course resemble the overall population of drivers but most drivers are not sleepy/raging/drunks and by extension “most people buying Teslas aren't part of” that group either.
To use an analogy: the average person has 1.9 legs but a product which results in its user having 1.95 legs is not an improvement for most people.
> Tesla owners absolutely can be habitual speeders, people who drive without sleep, road ragers, and drunk drivers.
The average Tesla owner may well be those things. But I know that *I* am not those things.
I know my own driving record well. I know my own accident history well.
So you don't have to convince me that Tesla is better than the average driver. You don't have to convince me that Tesla is better than the average Tesla owner. You have to convince me that Tesla is better than *me*.
Now, apply that same analysis to the 88% of drivers that think they are above the median [1] in terms of driving safety. It doesn't matter if I am _right_, it matters what I think about my relative driving safety.
For your specific question, actuarial tables for car accidents/deaths are available publicly, and the profile for the typical Tesla owner (middle aged, high income, driving expensive sedan/SUV) is considered much safer than average.
Moreover it is true in general that most people driving aren't drunk or high, aren't reckless etc., and so most Tesla drivers aren't either. The number of accidents isn't uniformly distributed among the public.
More broadly though, I never said that zero Tesla owners (or drivers of any other specific car) do any of these things, but that an autopilot with an above-average safety record can still lower the personal safety of an individual or group of individuals. For the system to be a net benefit, it has to be pushed to the below-average drivers.
For an individual person to be willing to adopt autopilot on safety grounds, autopilot has to be safer than that individual person's driving, not just safer than the average person's driving.
If your driving skill is above the mean (because you're never a road rager, speeder, etc.), then you're worse off using Autopilot, even if Autopilot is as safe as the average driver.
Since most people probably consider themselves above average drivers (and in fact most people may be above the mean if the bad drivers are outliers), this limits the number of people who will believe that Autopilot makes them safer.
If I remember correctly, they also cover the easy parts of the road and make the human take over in tricky situations.
So at the end of the day, you have spotless records for the autopilot that drove many, many miles on straight stretches, and humans having accidents at short intersections or construction sites. This translates into very favorable numbers for autopilot when presented as accidents per mile driven.
If it were the case that autopilot was benefiting from dumping all of the difficult parts on the humans, you would expect to see much higher than average rates of accidents per mile on the human driven parts, because they're only getting the hard parts. This tends not to be true, so I'm doubting it's a real effect.
Humans don’t disengage and let the machines handle the situation, but machines do it all the time. Do the machines disengage in tricky situations or on straight lines?
The statistics are there to read however, and suggest your reasoning is not correct.
Case A:
Non Tesla Human drives 999 easy miles and 1 hard mile
Case B:
Autopilot drives 999 easy miles, and human drives 1 hard mile
If the effect size of the hard mile is so large that it's skewing the statistics, you would expect the Tesla human driver to have a horrendous per-mile accident rate relative to the non-Tesla human driver. What's most likely is that accidents following a human handoff get correctly allocated to Autopilot, and the effect being described is not actually that significant.
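To make the thought experiment concrete, here's a toy calculation in Python. The 1-vs-50 rates are purely illustrative assumptions, not anything from Tesla or the paper:

    # Toy accident rates per million miles -- both numbers are assumptions.
    EASY_RATE = 1.0   # accidents per million easy miles (assumed)
    HARD_RATE = 50.0  # accidents per million hard miles (assumed)

    def per_mile_rate(easy_miles, hard_miles):
        # Blended accidents-per-million-miles over a mix of road types.
        total = easy_miles + hard_miles
        return (EASY_RATE * easy_miles + HARD_RATE * hard_miles) / total

    # Case A: a non-Tesla human drives the whole 1000-mile mix.
    print(per_mile_rate(999, 1))   # ~1.05

    # Case B: Autopilot takes the 999 easy miles, the human only the hard one.
    print(per_mile_rate(999, 0))   # 1.00  (Autopilot)
    print(per_mile_rate(0, 1))     # 50.00 (Tesla human)

If handoffs were really hiding the hard miles, the Tesla-human rate would stick out by an order of magnitude, like the last line; the published human rates don't look like that.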
Student drivers (with an instructor in the car, and secondary controls) have very few accidents, if any, not because of their safety record but because every mistake is saved by the instructor. A car can be considered as driving "safer than a human" when it can match the average human in any conditions the average human drives. Everything else is just squinting at the data and choosing an interpretation that fits your personal opinion.
A Tesla can relatively safely handle traffic on divided or controlled-access highways. That's a very narrow slice of all possible driving situations, and not the slice responsible for most accidents.
> Student drivers (with an instructor in the car, and secondary controls) have very few accidents [...] because every mistake is saved by the instructor
Is that the only reason? I suspect that many (most) student drivers are also in general more alert, drive more slowly, and follow every rule and guideline to an extreme. If they don't, the instructor will probably end the lesson and remove a dangerous driver from the road.
All student drivers, particularly at the beginning, are "dangerous drivers", because they aren't drivers yet but are asked to drive. In the first half of driving school, most students would probably cause an accident within minutes if not for the instructor. I live on a quiet street where most driving schools take their students due to the almost non-existent traffic. I can't count how many times I've seen the instructor have to intervene to avoid a crash, even on simple things like keeping a straight line or making a right turn (too tight or too wide).
So as a thought experiment, if Autopilot + Human intervention reduced the rate of accidents by 50% vs. just humans after normalization; can we consider autopilot to be adding value?
Of course the Autopilot adds value; all such driver assists are there to add value and help the driver, on any car. I just don't think Autopilot can drive, let alone drive more safely than a human. There's a difference between "helping a human drive more safely" and "driving more safely than a human". This is a confusion many people make when reading these stats, to Tesla's benefit.
Also, the oldest Tesla with AP is less than 10 years old (with an average of 3-4 years given the sales trends), while the average age of cars on US streets is over 12 years. An accident statistic that pits reasonably new, premium cars against everything else won't paint the correct picture.
Is this data available anywhere? How do I know that non-autopilot Tesla mile per accident rates aren't horrible?
Tesla publishes their own data for the safety of autopilot, which I presume is based on their own analysis of accident records. Is this same detailed information available to other groups (insurers, NIST, etc)? Or do they just calculate an aggregate "Tesla mile per accident" rate that is a blend of the great autopilot rate and horrible human rate?
I'm not trying to be facetious here. I have no idea if this data is available to any groups other than Tesla themselves. And if so, do they publish those numbers?
Tesla releases the miles per crash rates quarterly, for autopilot and non autopilot cases. Autopilot crashes include anything within 5 seconds of disengagement. The human rate tends to be more than 2x worse than the autopilot rate. This is not normalized for factors like road context.
The human rate for Tesla-driven miles tends to be ~4x better than the other brands' average. To precisely answer this question you would want to see a comparable brand's human performance, and probably the split between drivers who use autopilot and drivers who never do. We don't have that, but in my personal opinion there's enough evidence to suggest it's probably not a grand conspiracy. I'm of the opinion that autopilot being ballpark on par with other drivers is more than enough to reduce accidents substantially, at scale.
It’s not just that. There’s a context switch when the autodrive disengages, so the human is actually less ready for the hard mile than if they had been driving the whole time. Sure, the human is supposed to be able to take control at any time, but I don’t think that really happens. The whole point of autodrive is that you don’t have to pay attention and drive.
Humans don’t crash in every tricky situation, but Tesla claims that humans are horrible drivers while their cars hand control back to the humans whenever something happens.
Literally any distribution of hard miles and easy miles will produce the same outcome in this thought experiment, to varying degrees, if your premise that Tesla hides the AI's incompetence by passing off to humans whenever driving gets challenging is accurate.
If your premise is to be believed as a significant effect, you must also accept that this outcome should be visible in the data.
Your conclusion relies on the quite faulty assumption that situations that are inherently difficult for FSD to handle are automatically also more dangerous for human drivers. In snowy conditions, humans generally do just fine at following "lanes", be they the actual lane or the safest route that everyone else is following. Humans are also capable of deducing lane direction and orientation even when there are contradictory or old lane markings on the road, a situation in which FSD regularly causes danger.
Or that that negative effect is lost in the orders of magnitude of "all human drivers across all miles" versus FSD.
Just some reasoning really. Statistics and proper normalization are hard.
Tesla tends to say that autopilot crashes occur half as often as non-autopilot crashes. That's likely not normalized for road conditions. But if you assume Tesla is secretly putting all the hard miles on the humans, that would imply humans are driving many more hard miles and should have higher accident rates, while the autopilot must be performing worse on the easy miles and racking up additional accidents that wouldn't otherwise have happened.
If you combine those two, the overall rate of accidents should be higher than average, but it's actually lower by a fair margin. Again, normalization is hard.
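Rough arithmetic makes the point; every rate below is invented for illustration:

    # Assume easy miles carry 0.5 and hard miles 3.5 crashes per million
    # miles for an average human, with a 50/50 mile split.
    easy_human, hard_human = 0.5, 3.5
    baseline = 0.5 * easy_human + 0.5 * hard_human    # 2.0 fleet baseline

    # Hypothesis: Autopilot hogs the easy miles but is worse there than a
    # human would be (1.0 instead of 0.5); humans absorb all the hard miles.
    easy_autopilot = 1.0                              # assumed
    fleet = 0.5 * easy_autopilot + 0.5 * hard_human   # 2.25

    print(baseline, fleet)  # the hypothesis predicts 2.25 > 2.0

The hypothesis predicts a fleet rate above the baseline, while the observed fleet rate is below it.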
Ideally you would be able to compare human drivers of another comparable car brand to the human drivers of Tesla to confirm the Tesla drivers don't seem to be being judged on unreasonably difficult conditions.
There was a source, but I can't find it at the moment. It’s fairly simple: people don’t disengage, so their driving safety is judged over all the miles they drive plus all the situations where Autopilot disengages.
Tesla Autopilot is judged only on the miles driven without disengagement, which is actually quite limited. You can watch YouTube videos to see what kinds of situations make Tesla's autopilot give up.
There’s no situation where the Autopilot takes over from the human saying “That’s a tricky road, let me handle it”.
You seem to be missing the point, though. If this were significant, then human Tesla drivers should show up as performing much worse than other car drivers, because you're claiming they spend a disproportionately large share of their driving time on "tricky roads".
A non-Tesla driver should be doing way better, because they get to pad their score with the easy roads the autopilot supposedly gets.
How is it other car companies' fault that Tesla isn't publishing data so we can check whether Autopilot miles are mostly on straight roads and human miles are in tricky situations?
Tesla publishes the human accident rate, and the autopilot accident rate.
If another comparable car company published their human accident rate, and you were willing to assume that the baseline human accident rate should be the same for both brands, you could evaluate your hypothesis that Tesla is shoving the tricky situations onto humans.
As mentioned elsewhere, just because a situation is difficult for FSD to parse and process doesn't inherently make it a dangerous situation for a human driver.
I've taken taxis in my city maybe 50 times, and I've never felt even remotely in danger, except when I was in a Tesla taxi. The driver was well composed, calm, and seemed rational, but we had 5 (!) near-crash situations in a 15-minute drive, simply because the car can accelerate into spots opening up in moderately dense traffic so quickly that nobody can expect it.
The car enables a completely novel driving style which people need to get used to.
> several other risky groups that most people buying Teslas aren't part of.
This is a fallacy. Ask any demographic you like if they're road ragers, drunk drivers, etc... and they'll deny it. Everyone is sure that rare and terrible outcomes will never happen to them because of their own behavior, including the people to whom it happens!
It's probably true that middle aged premium car owners are safer on average, but you don't get to rule that stuff out by fiat. In fact there have been a few "Autopilot Saves Passed Out Driver" stories over the past few months, where Tesla drivers were clearly impaired.
As for this particular paper: this is just P-hacking, folks. You can't take a vague dataset[1] and then "correct" it like this without absolutely huge error bars. Why correct based on only these variables? Why did they all push the results in one direction? Why not gender? Why not income? Why not compare vs. like cars?
This isn't good statistics, it's just more statistics. If we want the real answers we should get a better data set (which, I'll agree, would likely involve some regulatory pressure on manufacturers).
[1] And, to be fair: Tesla's safety report is hardly comprehensive and provides no data other than the aggregate numbers.
> You can't take a vague dataset and then "correct" it like this without absolutely huge error bars.
That ship has already sailed; Tesla makes active safety claims based on that dataset. To hold research to your standard here would be to say that Tesla can make its claims, but nobody can challenge those claims.
> Why did they all push the results in one direction?
If you have two correction steps, each equally likely to cut either way, then a priori you'd expect a 25% chance that both act in this particular direction. I don't think this is very remarkable.
> Why not gender?
This probably would be a reasonable addition. I doubt it would change the results much, but we have the well-known fact that insurance rates differ for men and women, so it may be relevant.
> Why not income?
First, is this data even generally available? If the data doesn't exist, then we can't control for it.
Second, should we expect crash rates per mile driven to differ greatly by owner/driver income? A priori, I wouldn't think this demographic quality to have a strong impact.
> Why not compare vs. like cars?
I think "personal vehicle" is a reasonable comparative category. Would we expect collision rates to differ greatly between more specific categorizations? For the sake of the overall conclusion, would we expect Tesla-equivalents to be particularly crash prone in the broad dataset?
Any statistical analysis will have its limitations, but when we're talking about life-safety claims from a manufacturer I think we should have wide latitude to look at critical evaluations of the original data.
> That ship has already sailed; Tesla makes active safety claims based on that dataset. To hold research to your standard here would be to say that Tesla can make its claims, but nobody can challenge those claims.
I don't follow that logic. Claiming that Tesla's data set is incomplete is fine.
That doesn't mean that any junk science that refutes it must be correct. Bad statistics is bad statistics. This kind of "correlate dissimilar data sets by using more dissimilar data" is a terrible way to do science. Quite frankly it's almost guaranteed to be wrong. There are regulatory bodies out there with access to real data. Get them to figure it out.
I just think a lot of Tesla drivers see themselves as elite drivers or something when we all know that's not true if you step back away from the situation.
In my case, I'm happy to engage Autopilot even though I'm under no illusion that it makes me safer. It's just a very useful convenience feature.
OTOH: Tesla could still claim that if you replace all the distracted/drunk drivers in beat-up cars with new Teslas running Autopilot, the roads would be safer.
But that's not really contradictory, is it? If you believe that you're a better driver than autopilot, just don't use it, or buy another car. It's not at all dishonest or deceptive marketing (if the claim is factually accurate, of course).
I agree with this, and have also been accused of being a Tesla hater, but OTOH someone has to pave the way for the future in which none of /those/ people have to drive either, and our cities aren't taken over by surface parking for cars that move two or three times a day at most.
If we're not gonna switch to sane urban planning, may as well bring on the robo-taxis that can go anywhere
I think it's fair to compare to the average driver. You may not be average, by virtue of not being a teenager and not driving drunk, but you're not magically immune to being run over or crashed into by someone who is. So even if a hypothetical FSD-capable car didn't drive one iota better than you do, it could still make you safer by making the surrounding environment safer.
I'm not going to boast about how good a driver I am (I am!!), but let's say I am a median driver. I would want an FSD to be better than a median driver, not better than the average crash/mile stats that include all kinds of long-tail idiots doing most of the damage.
* ... alcohol-impaired driving crashes, accounting for 28% of all traffic-related deaths in the United States.
* Drugs other than alcohol (legal and illegal) are involved in about 16% of motor vehicle crashes.
So 44% involve intoxication to some degree (unless the stats overlap). That number doesn't even include reckless drivers not under the influence, teenagers who just got their license, elderly drivers in bad health, etc.
If you are speaking about the safety features in a new car, then I won't disagree. However, if you are implying that those with a nice new car are somehow better drivers, more attentive drivers, or care more about their car then I would highly disagree. As someone who purchased a new car last year, and does greatly care about it, I'm routinely shocked by people and their clear disregard for their own vehicle. Reckless driving, texting while not even attempting to look, and the way people park their cars so close to you that they can't even get out* have all shocked me now that I'm paying more attention.
* I have even stared at a guy struggling to get out of his fancy Jeep who opened his door into my car. I know this is anecdotal, but all it takes is a few counterexamples to show that car price does not have a super high correlation with how careful and considerate you are.
Look at the accident and fatality rates for drivers of luxury cars. They are much, much lower than the same rates for less expensive cars. In fact, if you break it down by vehicle model, there are several models with 0 fatalities at all.
I'd be curious to see where you are able to find good data for that. To be honest, I haven't found data that shows a clear difference between luxury and non-luxury cars
>> "average driver" who just bought $40-50K NEW car.
Driver and buyer/owner are different things. Few teenagers ever buy a Tesla, but they certainly drive them. A car cannot be safe only in the hands of the rich initial buyer who lives in a nice climate. It must be safe in the hands of all the other people that owner may let drive, and for subsequent second and third owners, people who might not be wealthy enough to stay home when it rains or when the autopilot thinks conditions are too rough. (It was -35°C on my drive to work this morning. Dark. Ice fog, then blowing snow.)
Have you been to LA, SF, Vancouver or Denver? These are Teslas, not Ferraris. Lots of teenagers are driving their parents' Tesla. Go look at any university parking lot and you will find plenty of $50,000 cars.
Sure the worst drivers out there utilizing assist tech like autopilot would benefit everyone, but data shows that they aren't the ones spending money on fancy cars. My comment was about me making a decision for myself.
Plus in your hypothetical the autopilot can drive at least as well as me, which is also a big assumption and not something I believe with what I have seen so far.
My problem is that the median driver is much, much better than the mean driver, and Tesla is comparing against means.
The distribution of damage done is heavily skewed towards the people who get into fatal crashes or total a car. I'd randomly guess only 20-30% of drivers are worse than the mean. Most people will never total a car, and the vast majority will never kill anyone.
If you get median drivers into an autopilot car that merely matches the mean safety record, you pull their safety down and the road becomes less safe. And luxury sedan buyers tend to be one of the safest demographics on the road, which makes the problem even worse.
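A tiny sketch of why a skewed distribution behaves that way; the population mix is invented purely for illustration:

    # Invented population: most drivers rarely crash, a small tail does
    # most of the damage. Rates are crashes per million miles.
    population = (
        [0.5] * 80 +    # 80% careful drivers
        [2.0] * 15 +    # 15% mediocre drivers
        [20.0] * 5      # 5% long-tail drivers (drunk, reckless, ...)
    )

    mean = sum(population) / len(population)
    median = sorted(population)[len(population) // 2]

    print(mean, median)  # 1.7 vs 0.5

An autopilot that exactly matched the mean (1.7 here) would more than triple the risk for the 80% majority sitting at 0.5.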
Huh. Citation needed that drivers who buy, say, a Model S, particularly one with Ludicrous mode, are less likely to be "habitual speeders" than the average driver.
This has come up before. When the new FSD beta started, people started claiming that safe driving was a function of vehicle price, and therefore Tesla drivers, especially those who had paid for FSD, were more likely to be safer. When I noted that my current vehicle costs more than a Model S, based on the logic, Tesla should be recruiting me to beta test FSD, well it was hard to find a refutation.
Wow, adjusted for driver age the difference vanishes.
That's quite an interesting observation, and it shows the importance of proper statistics. It also raises questions about companies producing their own stats, it's often only when someone takes the time to dig into it that we discover this kind of thing.
I am not holding my breath waiting for the common consumer to understand statistical critical thinking though. I sometimes catch myself forgetting the due diligence even after many years of forcing myself to think about stats.
Perhaps some sort of ombudsman could do this, pointing out when stats are lying to people.
> It also raises questions about companies producing their own stats
Companies saying "trust us, we have the data to prove it" while keeping that data secret should be ignored. Especially if their CEO has a long history of blatant dishonesty! I'm most annoyed that people gave any weight to Tesla's safety statistics claims in the first place.
> I'm most annoyed that people gave any weight to Tesla's safety statistics claims in the first place.
TSLA stock consistently made gains for a lot of people who otherwise wouldn't be paying attention, let alone giving weight to anything Tesla-related.
> Companies saying "trust us, we have the data to prove it" while keeping that data secret should be ignored.
While true, I do not think this is a great heuristic. How would the average person know whether the "data" is available? Most people don't even know what "data" really is. They have never heard of CSVs or SQL. We need a better signal for consumers to know what they can and can't trust.
An average person doesn't need to access the data directly. As long as journalists or researchers can, the average person can rely on their analysis and opinions instead.
I would certainly not rely on journalists to actually look at the data, let alone to have the technical skills to analyze and report on it. As it is, reporters often don't bother mentioning or linking to the data, or noting the lack of publicly available data.
Data-journalism is actually a thing, and bigger organizations like the New York Times have done some pretty impressive work building tools and training programs for their reporters.
A good journalist will include that context in a story. I know I read at least one article that included these safety claims while also noting that the data was private and unverifiable.
I think it's a good default to assume any self-beneficial corporate message is false, absent evidence to the contrary.
Companies don't validate their claims; they only provide data and statistics to back up marketing angles. There's no point to it anyway, since it would still be a conflict of interest when you've researched your own product and found no errors in it.
Driver age isn't the factor that explains the difference; the accident rate adjusted for age shifts up by a similar amount for both Autopilot and regular drivers.
What truly makes a difference is the kind of road being driven. The accompanying paper provides a figure showing Autopilot is essentially only used on freeways, which are inherently safer. Adjusting for that seems to explain most of the difference.
Though personally I'd have preferred to simply compare the accident rate on freeways for both groups rather than the weird weighting method they use.
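For what it's worth, that weighting is presumably something like direct standardization: reweight each group's per-road-type rates to one common road mix. A sketch with made-up numbers (none of them come from the paper):

    # Hypothetical crash rates per million miles, by road type.
    rates = {
        "autopilot": {"freeway": 0.8, "non_freeway": 3.0},
        "human":     {"freeway": 1.0, "non_freeway": 2.0},
    }

    # Assumed common reference mix of miles: 30% freeway, 70% other.
    mix = {"freeway": 0.3, "non_freeway": 0.7}

    for group, r in rates.items():
        # Rate the group would show if it drove the reference mix.
        standardized = sum(r[road] * share for road, share in mix.items())
        print(group, round(standardized, 2))  # autopilot 2.34, human 1.7

With these made-up numbers the raw comparison flips once the road mix is held fixed, which is exactly the kind of effect the adjustment is after.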
If you have taken a course in statistics, you'll know it's always possible to make the numbers look favorable to your cause. It would take several unrelated, blind third parties to do this correctly, and no public company will ever let that happen, especially Tesla.
This is an arrogant amount of speculation based on nothing more than personal intuition.
In fact, there are plenty of examples where public knowledge of safety is published by mandate of the government. Here is an example of FAA flight data and you can see details about incidents across airlines: https://www.faa.gov/data_research/accident_incident/
I'm not going to spend time digging it up, but I'm pretty sure there are just as many sources of data on other important things such as pharmaceuticals e.g. the covid vaccine.
So no, "the stock price would go down" doesn't mean a thing will never happen.
This isn't "proper statistics", this is P-hacking. Tesla's data is incomplete and bad. But going out and finding confounding variables to confirm your priors isn't the treatment.
A number of these near-crashes seem to be pretty odd errors (a couple of times it just decided to drive on the wrong side of the street?) -- albeit ones that one _could_ see drivers unfamiliar with the area making.
If a human driver started driving on the wrong side of a street, etc -- at least in Boston -- they would be greeted by a chorus of horns and rude gestures from those around, which would hopefully indicate to them that they were in error. Brings up an interesting question: does anyone know if Teslas are monitoring for surrounding horns/etc?
Another common problem in Boston is being blocked by geese crossing the road. Are Teslas able to detect such low, slow-moving obstacles?
It seems like it's not super feasible to rely on self-driving vehicles in a dense city like Boston, but maybe such features could be geofenced to areas like highways?
This is interesting. I have seen videos of Tesla self-driving successfully on similar types of streets in Seattle. I wonder if the quality of FSD changes from city to city based on how much data Tesla receives from other drivers in that city. I guess I'm suggesting that Seattle could have more Tesla drivers than Boston. I have no idea if any of this is true; just a thought.
I now wonder how the quality of FSD changes when you're cruising in a road with blue houses vs. red houses on the side. I'm not being snarky here, that's the kind of thing "AI" is known to do.
It for sure does. Where I live, Oxnard, it fails so badly I don't know how I was allowed into the beta. But if I drive a few cities over, it works well enough that if I didn't know better I'd say it's ready for the public. It's just wild.
That is wild. I know nothing about ML, but I would have thought the driving data Tesla is sucking in from various places was supposed to improve overall driving regardless of location, at least within the same country. It would be interesting to know how this actually works, but I'm sure it's all secret sauce we'll never learn much about.
I will have to stop by Oxnard and check it out. I’d been thinking that I must have an atypical experience since FSD is quite wonderful for me. But if a few dozen miles over and you get a city where FSD becomes awful…
Along Gonzales Rd it struggles when taking turns, in particular onto and from Oxnard Blvd and Rose Ave. Every time I drive west past Ventura Rd it violently swerves where the lanes shift next to the Popeyes. For fun, try the surrounding residential areas. It does 'ok', but often makes flying left turns where it doesn't even bother to slow down, even though there are cars parked blocking visibility.
Westlake, on the other hand, hasn't really been an issue. Thousand Oaks has been pretty smooth as well.
In those interventions it's not 100% clear the car would have caused an accident. It gets very "close", so the driver intervenes, and of course that is the safe thing to do. But a highly capable AI driver should be able to come much closer to crashing than the average human without issue. Perhaps it's the drivers that will need to adapt. Is Tesla accounting for this fear factor already?
I see what you're getting at, but no, I don't think AI driving is meant to get much closer than the average driver. In some cases, driving on the wrong side of the road, drifting, or not detecting objects is certainly a 100% failure, with no need to experience a crash to prove it. An AI needs to drive at safe distances just like humans, until and unless there are no more humans behind the wheel.
Those roads seem much easier to drive on than those of basically any European city I've driven in. Paris is 10x more complex than that, for instance. If FSD can't handle this, there's really not much of a use case in Europe apart from highways.
To give you an idea of the incredible features in some cities: I recently drove in Rennes, where they thought it a good idea to build roads with a single central lane for cars and a wide bicycle path on each side. You are supposed to drive in the center, and if a car comes from the other direction, each car moves onto the bicycle lane on its respective side of the road while giving priority to bicycles. I wonder how FSD would handle that...
IIRC that concept actually comes from the Netherlands. I've seen videos on Dutch road design showing it; I had no idea it had been implemented in France.
Interestingly, the routes in that video are in one of the few parts of Boston with a street grid (Southie: https://twitter.com/TaylorOgan/status/1488555262655045637 ) -- so I wonder how much _worse_ it is on the cowpath roads elsewhere...
How is it that Tesla (and other companies developing self-driving capabilities) are allowed to deploy their products in the real world? We make people take a test to get a drivers license -- why don't we have requirements for self-driving systems? We require that cars pass routine mechanical inspections -- why isn't the software controlling these cars regulated?
Exactly. Marketing: "Self driving!" "The driver is only in the seat for legal purposes. The car is driving itself." Legal: "Maintain control of the vehicle at all times."
Same with Summon. Marketing: "Bring your vehicle to you while dealing with a fussy child!" Legal: "Maintain attention and focus on vehicle at all times. Do not use while distracted."
Which I don't understand. Marketing is part of what makes the product. You can't simply promise stuff on your website and then put something else in the fine print. If you market the thing as (almost) self-driving, then the fine-print doesn't really matter. It is about customer expectations.
Most regulated products require post market surveillance. You have to actively monitor the market and how consumers behave. If they have a misunderstanding about the intended use of your product, you're overpromising. Which is clearly the case here if Tesla did their post market surveillance correctly.
I've yet to hear anyone who bought a Tesla say they didn't realise what they were getting. Searching HN, there are 600 linked articles on Tesla self-driving. It's not really a secret.
Regulation hasn't caught up with the technology, and even when regulation exists regulators are very slow to enforce it. Uber and Lyft were various degrees of blatantly illegal in many places they operated, but it took regulators years to actually do anything about it and at that point they were so well-established that it rarely made sense to actually punish them. It's a reliable bet to make if you've got investor millions to spend on lawyers and lobbyists.
This is the point that bothers me the most. Everyone is so fixated on the things they can see and talk about (i.e. regulate). The complex mountain of software running on these cars is the entire bucket of liability.
How do you mandate software quality? I don't think testing alone is enough if you seriously care about safety. Any test can be gamed as tesla has aptly demonstrated so far.
Dunno. What Musk says, that you can judge it on whether it's safer or not compared to humans, seems OK to me. There's no practical way to go through the mountain of code and conclude much. But maybe the judging should be done by someone independent rather than Tesla judging themselves.
There are requirements in some places, and maybe there should be more.
But on the other hand, even the most pessimistic examination of this data doesn't seem to suggest that these cars are below the "can get a drivers license" threshold.
Autopilot (called Autosteer in the UI of my Model S) is not full self-driving. That licensing issue is a different discussion altogether and doesn't really have much to do with the issue at hand.
It's kinda amazing to see the cognitive dissonance people have when it comes to "regulators".
When these regulators get swayed by the car industry into stupid stuff like the 25-year import rule, or keep speed limits around that have nothing to do with safety because they are based on an outdated MPG-savings initiative and end up disproportionately affecting lower-income people, they are rightfully criticized.
However, as soon as some manufacturer pops up and does something that the public doesn't like, even for silly reasons like not protecting people from themselves, people immediately jump to wanting the "regulators" to do something about it.
Kinda makes it easy to see who has irrational hate for Tesla/Musk versus those who actually care about technology.
The correct thing for the "regulators" to do is to make it mandatory that every car has the same set of cameras and instruments that Tesla cars do, and that data is streamed to a publicly funded data warehouse where its open source and available for anyone to use.
People with different opinions than yours are not a monolithic entity, so please don't paint me with the same brush as others just because my opinion has more similarities with theirs than with yours. Similarly, regulators are not a monolithic entity, and it is completely possible to want regulators to do one thing while also criticizing them about other things. Otherwise, your post has big "You criticize regulators, yet you want them to do something? Curious!" energy.
> Kinda makes it easy to see who has irrational hate for Tesla/Musk versus those who actually care about technology.
Another broad brush stroke and false dichotomy. It's entirely possible to care about safety before caring about TSLA's shareholder value, which has nothing to do with "irrational hate" or not "caring about technology".
For example, you won't see me complaining about Waymo's self-driving cars, because their program prioritizes safety instead of moving fast and breaking things/bodies. Does that sound like not caring about technology, or can you only care about technology as long as you're completely uncritical of Tesla's blatant disregard for safety?
> The correct thing for the "regulators" to do is to make it mandatory that every car has the same set of cameras and instruments that Tesla cars do, and that data is streamed to a publicly funded data warehouse where its open source and available for anyone to use.
That's just your opinion, and it's certainly convenient for Tesla. The "correct" and ethical thing for Tesla to have done is to keep their beta testing on closed tracks where all participants agree to be experimented on, only after giving informed consent and receiving just compensation for the risks involved. Instead, the entire public is being experimented on without its consent, and by untrained non-professionals.
The "correct" and ethical thing for regulators to do is ensure that Tesla ceases experimentation that can get the public killed and maimed, that the public never consented to.
Similarly, the "correct" and ethical thing for Tesla to do is to not try to portray their Level 2 autonomous system as being Level 4 or greater. Regulators, ideally, would handle that issue, too.
The issue isn't what you want them to do. The issue is that you are expecting a bunch of non-technical people, with no vested interest other than monetary gain through staying in office, to create regulation that notionally protects people from themselves based on the supposed advertising of a piece of tech, and that is an incredibly silly position to take, for multiple reasons.
Tesla clearly outlines what the autopilot is and isn't, and is not forcing people to use it. If you don't want to beta test, don't use it. Simple as that. No need for any regulations in the first place. If someone thinks that autopilot means they can go to sleep behind the wheel, and they crash, it is 100% their fault, not Tesla's or the regulators'. You don't blame mountain bike companies when someone decides to take their bike off a jump, go tail-up nose-down, and smack into the ground, even though their advertising often shows people doing big jumps on those bikes. Be consistent in what you believe.
Also, as far as self-driving goes, Waymo isn't so much pushing for safety as burning through VC cash in a giant scam. It started 13 years ago and has zero products. Meanwhile, startup shop Comma AI managed to get a self-driving highway autonomy system out in 5 years that you can actually buy, on a fraction of the budget.
> When these regulators get swayed by the car industry into stupid stuff like the 25-year import rule, or keep speed limits around that have nothing to do with safety because they are based on an outdated MPG-savings initiative and end up disproportionately affecting lower-income people, they are rightfully criticized.
> However, as soon as some manufacturer pops up and does something that the public doesn't like, even for silly reasons like not protecting people from themselves, people immediately jump to wanting the "regulators" to do something about it.
Weird how people want regulations that benefit the community and not regulations that are detrimental. Absolutely impossible to understand this.
I also want everything and everyone to just start working like a well oiled machine as humans live in Utopia, but that is a discussion about fiction, not reality.
Nobody working in machine vision is surprised by this.
We are a LONG way from having self-driving vehicles as a general thing, particularly when Tesla wants to rely on MV as their source of environment analysis.
My last job we did CV mostly focused on satellite images. When someone on the team mentioned she had a Tesla the director who set up the team was like "you see the results we get here. Why do you feel safe getting in a self-driving car?"
Regarding some of the conversations about programming something to obey the law or obey human convention:
- it’s tricky because the people on the roads expect human convention and drive for that
- as long as there is a mix of humans that drive by convention, it’s probably best to err on the side of following human convention to some degree
Case in point: center lane
When I first started using autopilot on two-lane roads, the car would stay dead center in the lane. When a car was coming towards me in the opposite lane, I began to notice that human drivers veer away from center to provide more buffer between themselves and the oncoming car.
Because I didn't want to piss other drivers off, I would often disengage and drift to the right of my lane while the oncoming car approached. If I didn't, there was always a last-minute extra drift away from me by the other car: the conventionally expected buffer distance wasn't enough to make them feel comfortable, due to the unexpected (lack of) behavior.
It's not a law/no-law issue above. But I have had similar experiences navigating into roundabouts with crosswalks in front of them (autopilot stops at an empty crosswalk where a normal driver would cruise through to the stop line to check for cars in the roundabout, and if none were there might pass through with a rolling stop/check).
One thing I wonder about the safety of Autopilot (because I have never used a Tesla).
Doesn't the Autopilot sometimes just give up in difficult situations? Either explicitly, by stopping behind parked cars and the like, or worse, by behaving so badly that the driver has to take back control for their own safety.
The worst case would be: autopilot doesn't see an obstacle, the driver sees it at the last moment and takes back control, but it is too late, and crash. Is that an autopilot crash or a human crash?
Or: there is some road work, the autopilot cannot maneuver around it and gives up, the driver takes back control and crashes. It is clearly a human error, but still, the autopilot leaves the driver exactly the short stretch where accidents are most likely to happen.
If things like that happen, it is almost as if the autopilot is used only where accidents are least likely to happen, which would obviously explain why people on autopilot have fewer accidents.
Yes, it does, which is also a great way to avoid "being active in a crash". The model of "a human must be completely focused 100% of the time, while not doing anything to control the vehicle, yet ready to take control" is not a reasonable model, and is why all the "partial" self-driving rules are nonsense.
It also helps the statistics directly: crashes are more likely in snow and rain, but Teslas don't self-drive in those conditions (that's one of the things this paper addresses).
Tesla has in numerous cases blamed the driver for not taking control of the unsafe "self driving" when crashes occurred. How many near misses have occurred because drivers intervened? Airlines and aircraft manufacturers are required to report events where a pilot had to take over to prevent an accident, but Tesla and co. are not.
How about the predominantly male ownership of Teslas causing higher accident rates? I see the paper adjusts for age and road type, but missing gender seems like a pretty big slip-up.
I didn't read the paper fully but grepped for "gender" and "sex" and couldn't find anything.
Don't forget to adjust for the ~50% greater distance driven by men [1]. Per mile driven, the difference between genders is negligible, and maybe even counter to what most people believe. Of course, men will still pay more in insurance, because driving is inherently dangerous and driving that much more often is riskier in absolute terms.
Men are also more likely to drive drunk, under the influence and are more likely to buy a car with more than 250hp. There's a _lot_ of confounding variables when examining crashes.
Good question! According to one study in LA, [1] men cause 60% of accidents. I do wonder, however, the extent to which this gap fades with age. Given that most Tesla owners are not super young, it might not make as much of a difference as the 60% number (assuming it is nationally representative) would indicate.
Keep in mind that men drive 16,500 miles per year on average, while women drive 10,100 miles. There is a noticeable gender gap in "accidents per year", but not in "accidents per mile". The paper compares accidents per mile, so there should be little gender effect in the results.
Note that when looking at 40% vs 60% of accidents (assuming males and females drive the same amount), the difference seems small, only 20 percentage points, but it actually means males cause 50% more accidents than females (60/40 = 1.5).
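Putting the figures quoted upthread together (the equal-driver-count assumption is mine, and the 60/40 split comes from a single LA study, so treat this as a rough sketch):

    # Miles per year and accident shares quoted above.
    miles_men, miles_women = 16_500, 10_100
    share_men, share_women = 0.60, 0.40   # share of accidents caused

    # Per-mile rates, up to a common constant, assuming equal numbers of
    # male and female drivers (a simplifying assumption).
    rate_men = share_men / miles_men
    rate_women = share_women / miles_women

    print(round(rate_men / rate_women, 2))  # ~0.92

Under those assumptions men come out marginally better per mile, which is the point: the gender gap lives mostly in miles driven, not in per-mile risk.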
I'm not too surprised by this and bet a lot of HN folks suspected that this was the case, but I'm really glad to see this come out and hope it gets a lot of visibility.
Controlling for confounding variables is a complicated and often subtle process that many people don't have an intuitive grasp of, so Elon's BS claims have probably been very convincing for many.
As a Tesla driver, I do not trust the autopilot yet and think it is years away (if ever achievable) from level 5...
The reason this study could have had this outcome is selection bias: people let autopilot drive only in the easiest road conditions, where the chance of something going wrong is smaller to begin with.
I would have a hard time imagining people regularly letting autopilot drive on a tight, winding road at night, or in snow, thus completely preventing those (frequent?) types of accidents from happening under autopilot.
EDIT: After reading the full paper and not just the Twitter graphs, some of what I said is wrong. They also controlled for age, and when controlling for both age and road type, autopilot was just worse.
On the one hand I had the same suspicion about the unadjusted data, on the other hand the fact that autopilot isn't worse is pretty promising!
However, average human rates include people driving drunk or tired, the elderly, teenagers, etc.
So at 43 when I am driving sober, I'm probably safer not using autopilot.
When I was 19 probably safer to use autopilot.
> the fact that autopilot isn't worse is pretty promising!
Presumably drivers take over when it makes mistakes, which tilts the stats a bit.
> When I was 19 probably safer to use autopilot.
As a parent, I wonder about what safety tech I would let my kids use when learning to drive. I want to make sure they fully learn how to drive, which makes me think I shouldn't let them use too much semi-autonomous tech.
At the same time, it seems foolish to tell them not to use blind spot monitors, or to even expect them to "check their blind spot" the old-fashioned way if their cars have monitors.
Interestingly, my father (in his 70s) tells me he'll never trust blind spot monitors, so I'm seeing a generational difference even between him and me. Too bad, since older drivers probably could benefit the most from not having to look back when changing lanes. Older eyes take longer to re-focus, so it's more likely an older driver would miss something happening ahead on account of having looked rearward.
I also think it will be important for any future teenager of mine to learn to drive without the regenerative braking that Teslas default to. It's a much better driving experience IMO, but I worry a new driver won't learn the "oh no, hit the brakes!" reflex if they rarely use the brake pedal. I'd probably force them to learn with the car set to a "normal" ICE-style idle/coast mode for the first few months.
Is there actually a "normal" style? I've only done a test drive but it seemed like there were various levels of automatic braking, but none that actually allowed you to coast.
> At the same time, it seems foolish to tell them not to use blind spot monitors, or to even expect them to "check their blind spot" the old-fashioned way if their cars have monitors.
I have never owned a car with blind spot monitoring. Do people really just rely on them completely? Even in dense urban environments with bikes around? Or is it more of a judgement call? Different perspectives on this would be nice.
I have never seen the little light on the mirrors fail to detect me while I'm riding, but I never trust a driver not to just turn right and cut me off anyway.
I am not trying to pass judgement, just to understand how these driver aids change the way people drive.
It's practically a trope how, in sci-fi movies with flying futuristic cars, it's a demonstration of extreme competence for a character to demand "manual controls" to pilot the vehicle... like every normal driver does today. I expect that will become more and more real-world. Even today, most drivers don't know how to drive a manual transmission vehicle, much less how to double-clutch one without synchros or how to stop on ice without ABS. Those skills are disappearing from public roads and from the capabilities of average drivers, and I expect driving without aids will follow in the same way.
As part of my job, I end up renting vehicles pretty frequently, they're often newer models than my daily and have blind spot monitoring, backup cameras, radar cruise, vision-based lanekeeping assist, etc. It's frightening how in the course of a week you can get used to having to put less effort into centering yourself in the lane, trusting that cruise control will just keep a comfortable distance from the vehicle ahead of you. Driving becomes a lot less stressful. When I get back home and climb into my old manual-everything beater, though, it's quite an adjustment.
Regarding blind spot checks, yes, the Audi I recently drove had alerts for that, the blinking yellow light in my peripheral vision when cars were passing me was a nice reinforcement, but I'm too conditioned to do head checks to skip those. Likewise, reverse cameras - I've driven many work vans, pickups with headache racks, Jeeps with scratched up plastic, pulled trailers and RVs, etc where the windshield-mounted rearview mirror is useless; lots of pros get used to backing up using the side mirrors only. However, I asked my sister in law (who is extremely competent at most things) to drive my truck for an errand and she asked for help backing it out of the driveway - her car has always had a backup camera, which is honestly lots easier and she was completely uncomfortable using the side mirrors.
It's not hard to imagine that someone who only drives with assistive tools would adapt to become dependent on them; I'd argue it's more unusual to expect that they wouldn't!
Nor how to goad an ox team into drawing a plow! What is the world coming to?
Seriously, though, I think it remains to be seen which of these are going to be mainstream requirements that drivers should be able to depend on.
Power brakes and ABS? If those fail on an otherwise well-maintained vehicle, you're likely to be excused of personal/moral blame in a collision.
Merge into someone that your camera-based blind spot driver aid didn't detect? Back into something because you were looking at your rear camera rather than your mirrors? That's your fault, the manual for the vehicle and the text on the rear camera display say so explicitly, and it seems people agree.
There's currently a marketing battle in the court of public opinion over whether radar/vision cruise, forward collision prevention, and vision-based lanekeeping driver aids are in the former or latter category. I think they're more likely to fall into the latter category and remain driver aids only for a long time, it remains to be seen if Cruise-like LIDAR systems are as rock-solid as ABS.
Perhaps pedantic, but also genuine curiosity, thinking about it:
> how to stop on ice without ABS
My understanding is that ABS won't do anything to help you stop on ice anyway. On ice, braking is hampered by the tires' lack of traction, whereas ABS addresses the problem of tires locking up, so you can maintain (some semblance of) directionality.
ABS absolutely does help you stop on ice. Locking up on ice makes traction even worse; you get a miserable coefficient of dynamic friction and a film of liquid water instead of the already poor coefficient of approximately static friction you get while there's limited relative motion between the tire and the road.
It's not a panacea, but ABS definitely reduces the distance you slide compared to if you lock your tires.
Citation: Towed a trailer a few weeks ago in the snow, and even with my beloved Tekonsha P3 trailer brake controller it was a challenge keeping the trailer behind the truck while coming to a stop on curves. It's easy to test the effects of locking the tires on ice with an empty trailer trying to stop a coasting truck...the truck brakes, with ABS, do most of the work.
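For intuition, the idealized flat-ground stopping distance is v^2 / (2 * mu * g). With invented friction coefficients for ice (rough illustrative values, not measurements):

    # Idealized stopping distance on flat ground: d = v^2 / (2 * mu * g).
    MU_ROLLING_ICE = 0.15  # tire still rolling; ABS keeps you near this (assumed)
    MU_LOCKED_ICE = 0.07   # tire locked, sliding on meltwater (assumed)
    G = 9.81               # gravitational acceleration, m/s^2

    def stopping_distance(speed_kmh, mu):
        v = speed_kmh / 3.6  # km/h -> m/s
        return v ** 2 / (2 * mu * G)

    for mu, label in [(MU_ROLLING_ICE, "rolling (ABS)"), (MU_LOCKED_ICE, "locked")]:
        print(label, round(stopping_distance(50, mu)), "m from 50 km/h")
    # rolling (ABS) ~66 m, locked ~140 m under these assumed coefficients

Whatever the exact coefficients, as long as the locked-wheel value is lower, ABS wins on stopping distance, which matches the trailer experience above.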
Driving for ~25 years. Did quite some kilometers when I was between 19 and 30.
My current car is the first one with a lot of assistance systems. I actually see them as an additional safety layer, but in the first instance I still use my senses. Blind spot monitoring is something I have learned to value as a slice in the Swiss cheese model of safety [1].
And I catch myself driving extra carefully when in a car without them, way more carefully than I drove before I ever had blind spot monitoring.
I've never owned a car with BSM but have rented for extended periods of time. I would expect that after a year or two of using the system while also actively checking my blind spot the conventional way, I would become comfortable with understanding its reliability and limitations (if any) and would adjust my behavior.
Some Hondas pipe through a camera feed from the side of the vehicle when you turn on your blinker, which wouldn't have the same potential weakness for cyclists that you mention with the binary sensor-based systems.
No. Even as a tech geek... I use it as an aid, but not as the primary. Even driving a fire engine with a dash monitor showing seven always-on, always-visible cameras around it, I use them less than I thought I would. I do use the cameras in my car supplementarily too; they help me dial in parking perfectly, etc. But I still find myself doing visual checks.
But I also grew up and had my formative driving learning years without the benefit of such aids.
> So at 43 when I am driving sober, I'm probably safer not using autopilot. When I was 19 probably safer to use autopilot.
That’s very dangerous thinking. Autopilot isn’t self-driving and will actively try to kill you from time to time, like all driver-assist systems. 19-year-olds are much more likely to abuse autopilot and drive distracted or drunk. The only reason those systems aren’t crashing left and right is that drivers are paying attention.
I’ve heard Tesla owners describing autopilot as feeling like being a passenger with a learner driver behind the wheel.
That was my first hand experience also, it feels like I’m the nervous dad gritting his teeth while the teenager does something technically legal and safe but nerve-racking.
>So at 43 when I am driving sober, I'm probably safer not using autopilot. When I was 19 probably safer to use autopilot.
That might work out for you, but if teens think like that _now_ and avoid years of driving experience, they can't expect to be any better at driving at 43 than they are as teenagers, which kind of kills the argument.
* It doesn't find anything to suggest Autopilot is less safe than claimed. It finds reasons to believe Autopilot's safety improvement relative to humans is smaller than claimed.
* Under both the road-adjusted model and the road+age-adjusted model, Autopilot outperforms the average driver on average. Not by much, but it's there.
* Everyone here seems to be assuming that the takeaway is that Autopilot is less safe than humans, but there's no evidence for that. If the numbers here are to be believed, I would take it as a good sign for Tesla, and almost by definition an advantage for people who are otherwise poor drivers.
* Question: why do crashes per million miles have such large swings in the baseline? It's weird that it varies so heavily by quarter without any seasonal pattern that's obvious to me.
edit: mildly annoyed at getting downvoted for seeming to be the only person here noting that the blue line is on average below the red line on every chart shown.
Perhaps the downvotes are because most people would be apprehensive about putting their life in the hands of an "average" safe driver that's built from software? I probably wouldn't allow any other average safe human to drive me, if the alternative of me driving myself is available.
You never take ubers on the grounds of safety? Because that is, on average, an average human.
Either way, I'm not too impressed. I wouldn't purchase Autopilot personally for similar safety reasons. But let's not twist the stats to suggest they're actually much worse than they are.
This has been obvious for a long time, because if it wasn't true it would be trivial for Tesla to disprove it, and they haven't. Allow me to quote myself (from a discussion about an Autopilot related crash in 2019 [1])
"The fact that Tesla consistently refuses to share full data on identical models driving with autopilot turned on, enabled but not turned on, and not enabled is extremely suggestive that the overall safety rates for autopilot on are not better."
The median driver is a difficult thing to study. 77% of people have experienced an accident, but I think fewer than 50% have been at fault for one. That's not to say they wouldn't be at fault if they drove more, but the median driver has a clean record. It's hard to know who to exclude.
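A toy simulation makes the skew concrete (the at-fault rate and driving lifetime below are assumptions of mine purely for illustration, not measured figures): with rare events, the median driver can have a spotless at-fault record even though the mean is well above zero.

    import random

    random.seed(0)
    AT_FAULT_PER_YEAR = 0.01  # assumed probability of an at-fault crash per driver-year
    YEARS = 50                # assumed driving lifetime

    # Each driver's lifetime at-fault crash count under the assumed rate.
    drivers = sorted(
        sum(random.random() < AT_FAULT_PER_YEAR for _ in range(YEARS))
        for _ in range(100_000)
    )

    print("mean at-fault crashes:  ", sum(drivers) / len(drivers))  # ~0.5
    print("median at-fault crashes:", drivers[len(drivers) // 2])   # 0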
That would be cool, but companies are often not forthcoming with the raw data needed to do such analysis independently (particularly if it could make them look bad).
IMO, the headline is a boring and editorialized take on the situation. Even when the data is normalized in this way, active safety only is 55 crashes per 100M miles and AP is ~42 crashes per 100M miles in the most recent quarter.
While the most recent quarter could be an outlier, some other quarters show a similarly dramatic difference.
Yeah, no shit. There is only one relevant comparison: Teslas that got into an accident with Autopilot vs. without. That would give you the accident-free distance driven... For some reason Tesla keeps comparing their high-end cars with cars from the '90s...
Insurance companies most probably have all the data at hand; couldn't coverage prices reflect the real-world risk as seen by someone with something to lose?
I don't pay much more to insure my Model S than my previous car, which had less than half the power and was a lot cheaper to fix if things went wrong. Which suggests to me that the insurance companies here in Norway don't think Teslas are unsafe, or at least that they aren't going to cost the insurer a lot of money. And they do have a lot of experience here: 15% of private cars on the road are electric, quite a few of them Teslas, because Tesla was in this market from the beginning.
It would have been, but Musk didn't think so and felt the need to embellish/exaggerate. His habit of exaggerating/lying is more on trial here than the actual Tesla vehicles.
Tesla should release demographic info if they have it. IIRC, luxury vehicles tend to have lower accident/fatality rates because their buyers tend to be more experienced, mature, and wealthier.
I'm also wondering about the assumptions the pre-print makes.
For example, it's assumed that Tesla's data has the same characteristics as the SHRP 2 NDS data, and that the demographic survey data from Hardman et al is accurate WRT the SHRP 2 NDS data.
> The analysis in this paper relies on the assumption that the freeway-to-non-freeway and age group crash ratios found in the SHRP 2 NDS are consistent with the manufacturer’s data, as there are no roadway specific nor age-related factors in the manufacturer safety report.
> Although the ages of drivers in Tesla’s crash rate data are unknown, their ages could be estimated from a 2018 demographic survey of 424 Tesla owners (Hardman et al., 2019).
But... it seems like Hardman et al. are using California-specific data.
> In this study, we used a cohort survey of Plug-in Electric Vehicles (PEV) owners in California administered by the authors in November 2019. Respondents had been previously surveyed by the UC Davis Plug-in and Hybrid & Electric Vehicle (PH&EV) Research Center between 2015 and 2018 as part of four surveys in the eVMT project when they originally bought their PEV. Respondents for the four phases of the eVMT survey were sampled from the pool of PEV buyers who had applied for the state rebate from the California Vehicle Rebate Program (CVRP). More than 25,000 PEV owners were surveyed between 2015 and 2018. A total of 15,000 of these respondents gave consent to be re-contacted and were invited for the repeat survey in 2019. In all, 4,925 PEV owners responded to the repeat survey.
And... the newer data indicates that younger age could be positively correlated with more Autopilot use and with long-distance travel (itself also correlated with more Autopilot use), which seems like a confound WRT the pre-print's conclusions, and so on.
> Age is negatively correlated, indicating that the lower the driver’s age the higher the odds of reporting more long-distance travel. This suggests Autopilot induces travel among younger Tesla owners. As with the model with all Level 2 automation, users’ income is negatively correlated.
This is the real question: a "perfect" autopilot may be impossible, because there are cases where it must choose between sacrificing something outside the car and sacrificing its occupants, and there will be disagreement about the right answer.
Setting the bar at "perfect" is pretty unrealistic given that humans are already pretty bad.
>Although the ages of drivers in Tesla’s crash rate data are unknown, their ages could be estimated from a 2018 demographic survey of 424 Tesla owners (Hardman et al., 2019).
Seriously? You're trying to claim your method is fairer because it uses a survey of 424 people from 2019, when Tesla has demographics for millions of cars across the globe?
My conclusion:
The author is saying it's not fair to compare with some old car. Tesla was one of the first brands to mass-adopt these safety features in every Tesla. It's the only brand that includes cameras and collision warnings in every trim, even the cheapest one. And they were the first to introduce many of these features; the rest are catching up. Why can't Tesla be proud of that? They give safety to every customer, whether it's the cheapest option or a $150k+ car. Only Volvo invests on par with Tesla in testing facilities, in-house extended crash tests, etc.
Autopilot and similar systems are safer compared to any car on the highway with old cruise control, because they have features that force you to stay focused. They can stop you at a traffic light, they can react faster to an animal running across the road, etc. And yes, all the measures like collision warnings and notifications make your driving safer. Yes, you can use them without Autopilot, but as a consumer I don't care. If I'm buying a car, the fact that this car is much safer compared to a 10-year-old Ford makes me more confident in the purchase.
You can debate a lot about how easy it is to circumvent some of these safety measures. But if you're an idiot, you can fall asleep in a cruise-controlled car as well. Tesla will at least try to wake you up. And will stop at a traffic light.
FSD is a good example of an extension of this rational-vs-emotional thinking. FSD Beta has very strict focus control: look away from the road a few times and you're blocked; take your hands off the wheel a few times and you're blocked; etc.
So even though FSD Beta in its current state is complete garbage that quite often literally drives you toward an accident, the way they enforce your attention fixes everything.
FSD Beta, with these limitations, has 60k cars on the road, and zero major incidents leading to serious injury or death. Not counting a few scratched rims and one time a driver went off the road because he overreacted to some maneuver.
It's bizarre how much money legacy auto is pouring into efforts to make Tesla look bad. Before, it was the infamous stories of burning EVs, even though if you look at the data a single brand, BMW, has something like 100 times the fire risk compared to Tesla. You don't see paid ads in Times Square about how dangerous BMW is.
Right now all the major brands are investing in EVs, and suddenly all these fire stories have disappeared. And they've switched to attacking Autopilot and FSD.
Is there room for improvement? Yes. Is a car that can stop at a traffic light (after some fixes for the cop-car case), notify you about a potential collision, and automatically avoid some collisions safer? Definitely yes. The rest is just hype, speculation, and quite often corruption and paid articles.
Not saying that Tesla shouldn't be more transparent, but Niedermeyer is famous for how wrong he has been about Tesla on virtually everything. Like many Tesla partisans (both pro and anti), he went way beyond rational discourse long, long ago.
I also feel they compare Tesla with Autopilot vs. other cars, instead of Tesla with Autopilot vs. Tesla without Autopilot. Teslas do way better in crash tests, but that doesn't mean Autopilot is safer.
There are valid criticisms of Tesla, this normalized data story is probably one of them, but the fans and anti-fans of Tesla are so equally rabid that it is nearly impossible to have a rational conversation about it.
Expressing a view in either direction results in an instant pile-on of trolls and flames.
So Tesla news can be discussed anywhere except HN?
The reason HN is exceptional is that it has a lot of readers working on or near AI, who will quickly call out the B.S. in Tesla PR. They know this, so all Tesla stories get nuked.
(1) Statistics obtained from Hardman et al. (2019)
(2) Statistics obtained from Blanco et al. (2016)
(3) Statistics obtained from Transportation Research Board of the National Academy of Sciences (2013)
I don't understand how such chronologically disparate sets of data could produce reliable statistics. The Tesla demographic data (1) was from a "2018 demographic survey of 424 Tesla owners".
Pre-print research based on an unpublished paper/dataset and “personal correspondence” makes this questionable. Why are we sharing this before knowledgeable and relevant reviewers get hold of it?
Cruise control (including Autopilot) is only applicable to a subset of ideal driving conditions anyway.
Comparing cruise-control crash rates against general driving in all conditions is exactly what you'd do if you were trying to abuse statistics to market your gimmick cruise-control feature.
It's unclear to me how you'd even attempt to normalize the data when you don't have national crash rates for cruise-control-only miles in non-Tesla vehicles.
I saw mention that one of the TLAs investigating Teslas crashing into emergency vehicles subpoenaed all manufacturers of vehicles sold with L2 cruise-control features for their safety data. Maybe that will produce normalized-enough data for a meaningful comparison.
An even more serious problem with Tesla's claims regarding the safety of its Autopilot is this:
Autopilot failure statistics are non-ergodic. You don't get to "play again" after a catastrophic Autopilot failure.
So it's not enough for Autopilot to be 5x or 10x better than your own driving; it has to be something like 100x or 1000x better to warrant risking your life on it.
Trusting an autopilot that is just 10x better than you are?
Insane. It randomly drives under a semi-trailer that you would have easily seen and avoided, and you're dead.
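To put rough numbers on the cumulative-risk framing (the per-mile fatality rate and lifetime mileage below are my own illustrative assumptions, not figures from the article), here is what a per-mile improvement factor does to the probability of surviving a lifetime of driving:

    # Illustrative assumptions only; tweak to taste.
    FATAL_PER_MILE = 1.3e-8   # assumed baseline fatal-crash probability per mile
    LIFETIME_MILES = 600_000  # assumed lifetime driving distance

    for improvement in (1, 10, 100, 1000):
        # Surviving a lifetime means avoiding a fatal crash on every single mile.
        p_survive = (1 - FATAL_PER_MILE / improvement) ** LIFETIME_MILES
        print(f"{improvement:>5}x better: lifetime survival ~ {p_survive:.6f}")

The per-mile rate compounds over every mile you ever drive, which is the sense in which you don't get to "play again".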
> Crash rates can be adjusted to account for differences in environment and demographics in different data sets. A sample dataset with a crash rate r is exposed to some variable i at a different proportion p_i than the comparison dataset. In the case study, for example, vehicles running Autopilot were driven on freeways (i) 93% of the time, resulting in p_i = 0.93. In the SHRP 2 NDS, only 28% of vehicle mileage was recorded on freeways, i.e. p_i = 0.28. In the SHRP 2 NDS data, vehicles on non-freeways crashed 2.01 times more often per mile than vehicles on freeways. The observed Autopilot crash rate can be adjusted to reflect national driving rates, i.e. the crash rate that might be observed if 28% of Autopilot mileage was on freeways and 72% were on non-freeways, assuming that the 2.01 ratio holds for Autopilot.
So if humans are 2 times more likely to crash on non-highways than highways, the argument presented here is that the added danger of non-freeways degrades Autopilot's performance by the same factor. That seems like a far-reaching assumption.
If there is no data on autopilot performance on rural roads, then you cannot make a claim either way.
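For concreteness, a minimal sketch of the re-weighting the quoted passage describes (function and variable names are mine; the paper states the method only in prose):

    def adjust_crash_rate(observed_rate, p_sample, p_target, risk_ratio):
        """Re-weight a crash rate from the sample's road mix to a target road mix.

        observed_rate: crashes per mile observed in the sample
        p_sample:      fraction of sample miles on freeways (0.93 for Autopilot)
        p_target:      fraction of target miles on freeways (0.28 in SHRP 2 NDS)
        risk_ratio:    non-freeway / freeway crash rate per mile (2.01 in SHRP 2 NDS)
        """
        # Decompose: observed = freeway_rate * (p_sample + (1 - p_sample) * risk_ratio)
        freeway_rate = observed_rate / (p_sample + (1 - p_sample) * risk_ratio)
        # Recombine under the target road mix.
        return freeway_rate * (p_target + (1 - p_target) * risk_ratio)

    # With the figures from the quote (observed rate in arbitrary units):
    print(adjust_crash_rate(1.0, p_sample=0.93, p_target=0.28, risk_ratio=2.01))
    # ~1.61, i.e. the Autopilot crash rate rises by roughly 60% once re-weighted
    # to the national road mix, assuming the 2.01 ratio holds for Autopilot.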
1: https://engrxiv.org/preprint/view/1973/3986
2: https://www.thedailybeast.com/how-tesla-and-elon-musk-exagge...