While I agree, let’s not pretend this is limited to Tesla. My feed lately has had numerous stories of crazy FSD taxis as well.
I also have to say that one of my concerns with FSD is the deterioration of people’s driving skills and their awareness of their car’s abilities (especially as those change over time). Leaving aside all the wisecracks about people’s normal abilities, or about them not paying attention anyway, let’s take a snowstorm. FSD can’t drive in it, so you’re left with regular human drivers going manual in their cars. But they haven’t actually driven themselves in a while, so they’ve forgotten some of the lessons they learned, such as how to brake differently on ice and snow. They don’t know where the corners of their car are, they’re driving entirely too fast, and, because their FSD car was compensating for mechanical issues, they’re not aware that their tires are near-bald and the brakes are iffy.
Thing is, I know this is something that’s going to happen. I just don’t know how we can mitigate the risks.
IMHO, Waymo and Cruise AVs are different animals. They have LiDAR. Musk is still hell-bent on developing a camera-only system, which is inferior. But it’s cheaper and less bulky, so Musk is all about it.
It seems more likely than not that both Waymo and Cruise already surpass average human driving safety.
I’m really curious how the next FSD version (which apparently relies entirely on neural nets for driving) plays out.
Not that I think it’ll be particularly good, just particularly interesting.
Oh, I completely agree on all points. None of them are ready for full autopilot.
https://arstechnica.com/cars/2023/09/are-self-driving-cars-already-safer-than-human-drivers/