
There are a lot of true believers who think Tesla + Musk will crack self-driving and/or humanoid robots any day now.

I am so confused when I read things like this, because my Tesla Model 3 has been effectively self-driving for me for months now. Hundreds of miles without intervention. No other car I can buy can do this yet.

That's irresponsible at best, given that it doesn't support full self-driving. I've never understood why end users are allowed to just beta test a car on public roads.

Is it responsible to let users use automatic speed and automatic lane-keeping on a high-speed highway without other autopilot features?

Roll out both technologies at scale and try to guess which one will cause more harm, given the fact that there will be users in both kinds of cars trying to put their legs on the steering wheel:

A stupid tech that will not even try to do safe things

Or software that is, let's say, 4x less safe than the average human but still very capable of maneuvering without hitting obvious walls, etc.?


Giving people more ways to shoot themselves in the foot does not improve safety. I find the entire thing a kind of dark pattern: the system, along with misleading marketing, makes you lax over time just to catch you off guard.

You get used to the system working correctly, and then when you least expect it, it does the unthinkable, and the whole world blames you for not supervising a beta software product on the road on day 300 with the same rigour you did on day one.

I can see a very direct parallel with LLM systems. Claude had been working great for me until one day it git reset the entire repo and I lost two days of work because it couldn't revert a file it had corrupted. This happened because I supervised it just like you would supervise an FSD car in "bypass" mode. Fortunately it didn't kill anyone, just two days of work lost. If there were a risk of someone being killed, I would never allow a bypass/FSD/supervised mode, regardless of how unlikely that is to happen.


They have very good guardrails to prevent that, unlike auto lane-keeping, etc.

Teslas have sensors, eye trackers, etc. Is it possible to shoot yourself in the foot? Sure. But not in any way different from a human doing irrational things in the car: makeup, arguing, romance, etc.

Human beings are irrational creatures that should not drive, except for fun in isolated environments. Tesla or Waymo or anyone else... It is good to remove humans from the road, the faster the better.


>> It is good to remove humans from the road, the faster the better.

I'm all for this, but not to replace dumb people with dumb software. I think FSD should be treated more like airplane safety. We have the opportunity to do this right, not just do whatever is the cheapest way we can get away with.


Well, if you don't read news that tries to panic about everything new, that's more or less exactly how people currently use FSD.

When I'm driving and I want to drink, eat, etc., instead of doing the weird one-handed tricks every driver has done, I just turn on FSD and let it drive. When I'm tired, I do the same. Again, the attention monitoring works really well; it doesn't let you sit on your phone, etc., unlike many other cars with less advanced features. You can't be on FSD + phone, but you can easily be on the phone + lane control in another car.

The phone is by far the biggest real killer of people, and nobody is trying to create a campaign against phone mounts, etc.


The fact that other cars are less safe doesn't automatically make yours safe.

Legally Teslas are Advanced Driver Assistance Systems, while Waymos for example are Automated Driving Systems.

If you're driving a vehicle in the former category, you'll be on the hook for reckless driving if you aren't fully supervising the vehicle.

I'm pretty sure the original commenter was supervising the driving, though.


Except for their limited Robotaxi service. They have recently ditched their safety driver as well, so there is truly no one "driving" the car.


Based on the self-driving trials in my Model Y, I find it terrifying that anyone trusts it to drive them around. It required multiple interventions in a single 10-minute drive the last time I tried it.

I use FSD for 100% of my driving and only need to intervene maybe once a week. It's usually because the car is not confident or is too slow, not because it's doing something dangerous. Two years ago it was very different: on almost every trip I needed to intervene to avoid a crash. The progress they have made is truly amazing.

Would you use FSD with your children in the car? I sure as hell wouldn’t. Progress is not safety.

Yes I do in fact use FSD with my children in the car.

I pray for you and them. You need it

Oh, well, that's because you aren't using V18.58259a. I follow Elon's X and he said FSD is solved in that update. Clearly user error.

How long ago was that? I doubt it was the v14 software. The software has become scary good in the last few weeks, in my own subjective experience.

It certainly wasn't in the past few weeks, but I've been hearing about how good it's gotten for years. Certainly not planning to pay to find out if it's true now, but I'll give it another try next free trial!

Make sure you are on AI4 hardware when you do. If you buy FSD on AI3 you'll be limited to v13, which is terrible. I have used both and they are in different leagues altogether.

This exact sentence (minus the specific version) is claimed every single week.

No, something does not "become scary good" every single week for 10 years and yet still not be able to drive coast to coast all by itself (which Elon promised it would do a decade ago).

You are just human and bad at evaluating it. You might even be experiencing literal statistical noise.


You need only look at Tesla's attempts to compete with Waymo to see that you are just wrong. They tried to actually deploy fully autonomous Teslas, and it doesn't really work; it requires a human supervisor per car.

They are behind Waymo, but they are getting there. They started giving fully autonomous rides, without a safety driver, in Austin last month. Tesla chose a harder, camera-only approach, but it's more scalable once it works.

Waymo can go camera-only in the future too by training a camera-only model alongside their camera+lidar model.

They'll probably get there faster too because the decisions the camera+lidar model makes can be used to automatically evaluate the camera-only model.
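A minimal sketch of that evaluation idea, assuming hypothetical model and scene interfaces (none of these names come from Waymo): the camera+lidar model's decision is treated as a pseudo ground truth for scoring the camera-only model on the same logged scenes.

    # Hypothetical sketch only: Decision, lidar_model, camera_model and the
    # scene objects are all illustrative, not any published Waymo interface.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class Decision:
        steering: float      # radians
        acceleration: float  # m/s^2

    def disagreement(ref: Decision, cand: Decision) -> float:
        # Crude distance between two driving decisions.
        return abs(ref.steering - cand.steering) + 0.1 * abs(ref.acceleration - cand.acceleration)

    def agreement_rate(scenes: Iterable,
                       lidar_model: Callable[[object], Decision],
                       camera_model: Callable[[object], Decision],
                       tolerance: float = 0.05) -> float:
        # Fraction of logged scenes where the camera-only model stays within
        # `tolerance` of the camera+lidar model, used here as a pseudo label.
        scenes = list(scenes)
        agree = sum(1 for s in scenes
                    if disagreement(lidar_model(s), camera_model(s)) <= tolerance)
        return agree / len(scenes) if scenes else 0.0

Real evaluation would obviously use richer metrics, but the point is that the lidar-equipped model's own behavior gives you labels essentially for free.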


Why is it more scalable? LIDAR is cheap now.

Clearly at this point the camera-only thing is Musk's ego getting in the way of the business, because any rational executive would have slapped a lidar on there long ago.

>more scalable

It's cheaper, that's all it is.


Which makes it easier to scale?

Which is using a five-dollar word to describe a one-cent fact.

Scalability is usually about O(n²) vs O(n log n) or something, not a smaller constant that's significant but not a game changer.


Not if they have to have remote drivers ready to help out with the "autonomous" system.

...if it works.

Tesla has recently started introducing unsupervised cars as well.

Yes, they moved the "safety driver" into a chase car.

And the results speak for themselves.

https://www.gurufocus.com/news/8623960/tesla-tsla-robotaxi-c...


And seemingly only along one stretch of road? Like, this happened in Dublin in 2018: https://www.irishtimes.com/news/ireland/irish-news/driverles... - going up and down a stretch of road is about as easy as it gets.

> Mr Keegan said he was “pretty confident” that in “the next five to 10 years” driverless vehicles would “make a major contribution in terms of sustainable transport” on Dublin’s streets.

As always, people were overoptimistic back then, too. There are currently no driverless vehicles in Dublin at all, with none expected anytime soon unless you count the metro system (strictly speaking driverless, but clearly not what he was talking about).


A bus crashing into a stationary Tesla counts as a crash for Tesla? What in the world is this metric?

Ask Musk why he refuses to provide details of accidents so we can make a judgment.

Tesla’s own Vehicle Safety Report claims that the average US driver experiences a minor collision every 229,000 miles, meaning the robotaxi fleet is crashing four times more often even by the company’s own benchmark.
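Rough arithmetic, taking both of those figures at face value: 229,000 / 4 ≈ 57,000 miles per minor collision implied for the robotaxi fleet.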

https://www.automotiveworld.com/news/tesla-robotaxis-reporti...


I don't see how we could know the rate of minor collisions for US drivers like that. There's no way most people are reporting 1-4 mph "collisions" like this one.

You don't have to know. You can fully remove the few "minor" accidents (which a self-driving car shouldn't ever be having anyway) and the Tesla still comes out worse than a statistical bucket that includes a staggering number of people who are currently driving drunk or high or reading a book.

The car cannot be drunk or high. It can't be sleepy. It can't be distracted. It can't be worse at driving than any of the other cars. It can't get road rage. So why is it running into a stationary object at 17mph?

Worse, it's very, very easy to take a human who crashes a lot and say "you actually shouldn't be on the road anymore," or at least their insurance becomes expensive. The system in all of these cars is identical. If one is hitting parked objects at 17 mph, almost all of them would do it.


You and I must not drive the same Tesla, then, because my Model Y is a terrifying experience when "self-driving" anywhere besides highways.

I do wonder if folks who say Tesla’s FSD works well and safely are simply lacking a self-preservation instinct.


Even on highways I've had to intervene maybe once every 50 miles as it will often miss exits for me. This is a 2025 Model 3 with the latest 14.2 update in a major US metro.

"No other car I can buy can do this yet"

How many have you tested in your day to day life?


"dude trust me"

Hundreds of miles is not an appropriate sample size for the technology's intended scale.

See this related article and discussion: https://news.ycombinator.com/item?id=47051546


It's a very capable L2 system, it's just that it's been a very capable L2 system for a while now, and it still seems far away from reaching L4.

And of course, Musk's insistence that they don't need other sensor types like lidar or radar definitely looks like it's getting in the way.


Can you watch Netflix and have a beer while it's driving? No? Then it's not self-driving.

Because if you get in an accident, you personally, not Tesla, are liable. As soon as I'm not liable for an accident when the computer is driving, I'd sell my other cars and put my family in pink PT Cruisers if those were the only cars offering that.

Ask those who were killed while using FSD for their opinion on it before forming your own ;)

The data from their self-driving pilots disagrees, even if it works for you. It's simply not ready to be a taxi that makes money by itself.

It might be a nice feature for your car to have. But most people aren't paying for it; the conversion rate is very low.

So they are not making money from taxis and not making much money from software sales.

So does it matter that for you personally it drives you around sometimes?

Even if you price in a 4x increase in the FSD purchase conversion rate, you can't explain the stock price.

And I say this as a former Tesla investor who assumed that conversion rate would be better than it is. But for that reason (and many others) I couldn't justify the valuation and dropped the stock.


Months during which you're still required to be paying attention. Meanwhile, two years ago Mercedes-Benz Drive Pilot, a Level 3 system, let you sit and watch a movie without paying attention to the road.

Personally, that's way more useful to me, even if they didn't let you turn it on at highway speeds.


Actually, Mercedes has killed Drive Pilot for now: https://insideevs.com/news/784404/mercedes-level-3-drive-pil...

They canceled it because of poor adoption rather than any technical issues.

Which, if anything, looks worse for Tesla long term. If luxury car owners aren't willing to pay $200/month for self-driving, then trying to upcharge people buying used Model 3s and Ys, after canceling the S and X, looks dubious. Which means that $100/month subscription likely loses them money vs. an $8k purchase.


Mercedes' system was pretty useless because you could only use it in very limited conditions (specific freeways, only while following another car). Nobody wants to pay $200/month for something they can use for 5% of their driving. Tesla FSD drives for you end to end.

Most people have a rather consistent commute, so the Mercedes was more like a 0% or 80% kind of thing. The issue was that adding more roads wasn't going to help; the underlying benefit of attention-free driving just wasn't that valuable, even to customers who could use the system regularly.

They are looking to reintroduce it with a much higher top speed of 81 mph, which might help, but again my issue isn't with the particular system but with the underlying assumption of how much people value attention-free driving.


People need to stop with this. The MB system was Level 3 on something like 0.1% of roads, and only in 5% of cases even when you were actually on those roads.

That's kind of like saying 'look this algorithm is awesome' if we feed it all the data in the optimal order.


Meanwhile in China, the humanoid robots are doing Tai Chi and somersaults...

But Tesla doesn't do all this even more and better!

And there are also a lot of people claiming Tesla stock is being manipulated.

"true believers" yup this never changes .

Actually, it kinda is? Who's verifying that all the lessons are actually teaching correct info, instead of bullshit?

The initial moat will be enormous, because it's such a difficult problem.

Eventually, it will become more of a commodity-level task, but by then most of the big incumbents will already be very established.


Yeah, that's basically the opposite of what I'm saying. The problem is enormously difficult, but it won't be solved as a problem in itself; it will suddenly become solved as a side effect of better AI, exactly like numerous cognition-related problems suddenly got solved with the advent of LLMs. And we've seen that there are many players working on better AI, with the main ones all maybe one or two years away from each other. Self-driving will become commoditized well before any of the early players can make substantial amounts of money out of it. Compared to LLMs, it's even worse, as self-driving can't really get superhuman, and people will happily settle for whatever works acceptably well (where acceptably means at the same level as a good human driver).

It's already been solved city by city, so I don't know what you're talking about.

Cool, so where can I buy a car in which, wherever I am, I can comfortably sit in the back knowing that it will take me to my destination like a good driver? Or that I can summon to come pick me up?

You can't, because the only actually autonomous cars right now are robotaxis.

Ah, and why so? Why are they introduced city by city?

To verify they can operate safely in any given city, and because you need operations staff and resources in every city for a robotaxi service.

Same reason Tesla is doing it that way.


Okay, so the video is just random predictions? Why should we believe any of this is true?

I regularly bike, which is why I'm hugely in favor of self driving cars; they're way safer for me when biking than human driven cars.


They've already talked about this new hardware many times, this is just announcing that said new models are starting fully driverless operations.

Waymo announcements tend to be very incremental; each one is only a small change from a prior known state. They seem to operate with an attitude of least possible surprise, probably to avoid spooking anyone about scary robocars.


Waymo already has a teen mode for this.

Yeah, but I mean consumer versions I can use in my semi-rural area. I doubt we will have a Waymo taxi service out here anytime soon.

This is about the opposite of reality. Tesla is way behind in actually deploying autonomous vehicles, and other robotaxi makers with real deployments besides Waymo also use lidar.

Yeah, millions of miles per day of autonomous driving for Tesla. Yes, it's still supervised (somewhat; you don't have to pay much attention anymore), but the technology is absolutely there and is just being refined at this point.

Waymos are geofenced, restricted to certain roads, and have remote humans in the loop a lot more than most people assume.


Not actually autonomous until it can be more or less trusted to handle itself, without constant human supervision. Until then, it's just prototype or testing miles.

Tesla's current robotaxi deployment is also geofenced and monitored by humans -- more so than Waymo, even -- but of course Tesla superfans always conveniently leave that out of the narrative.


I don't think you understand the level of autonomy that anyone with FSD currently has. It drives itself for hours with zero interventions. Someone did a coast-to-coast drive with no interventions. Yet somehow I'm just a confused fan and you're not biased by politics?

They will turn off the supervision requirements soon, and suddenly there will be hundreds of thousands of Teslas that can drive themselves.

The skepticism is hard to understand.


> Yet yeah somehow I'm just a confused fan and you're not biased by politics?

Yes, because you're confusing "can generally make the decisions necessary to drive by itself" with "can be trusted to drive by itself with a non-attentive human occupant".

> They will turn off the supervision requirements soon

Oh, totally; this year, right?

The complete lack of self awareness is absolutely astounding.


Yeah it’ll happen this year. Please mark my words.

Waymos get stuck in weird situations every day. They need intervention, even on their guardrails. FSD is of course not perfect, but yes they have obviously cracked the code (it’s obvious if you use it or get beyond the groupthink here), and are being cautious with turning off supervision requirements. As they should.


> But I wanted something that at least tries to teach you the language instead of teaching you to play a language-themed game.

I'm interested. What's the fundamental difference here, that actually pushes you to learn the language in a useful way?


> The problem with these is always who pays for fraud.

I'm curious how India's UPI handles fraud/refunds, as the system seems to have garnered near-universal praise.


India's UPI is a national service, so fraud is "relatively easy" to combat, but it depends on banks' responsiveness.

However, what I've heard from my Indian friends is that UPI fraud is on the rise and becoming a big challenge.

Edit: the UPI fraud rate is similar to the credit card fraud rate, but only about ~6% of the money lost to UPI fraud has been recovered. If this trend continues (the fraud percentage continues to grow and the recovery rate does not improve), the UPI system might get into trouble.

Btw, the stats say that the UPI fraud rate has been doubling every year for the past few years.


Surely the EU could pull off something similar to what India did with their instant payments program? That system seems to have garnered near-universal praise: https://en.wikipedia.org/wiki/Unified_Payments_Interface
