  • I feel like with my Toyota, the towing limit on the spec sheet doesn’t mean “after this number, the hitch snaps off.” For legal reasons, I need to state that that is only a guess, and that I have not tested it on anything but private roads.

    For God’s sake, the limit should have at least a 25% buffer; a 1% buffer is madness. With Toyotas, we don’t even know what the buffer is. It’s large enough that I’ve never met anyone who got nerve-wrackingly close to it, and I’ve certainly never met a Toyota driver whose frame or hitch has snapped off. Meanwhile, I’ve now seen something like 5 Cybertrucks with the whole towing assembly just tumbled across the asphalt.
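    To put rough numbers on what a design buffer means, here’s a toy calculation. The figures are invented for illustration, not real Toyota or Tesla specs.

    ```python
    # Invented numbers: a 10,000 lb rated limit, two hypothetical design buffers.
    RATED_LIMIT_LBS = 10_000  # what the spec sheet tells the driver

    def failure_load(rated_lbs: float, buffer: float) -> float:
        """Load at which the hardware actually gives out, given a design buffer."""
        return rated_lbs * (1 + buffer)

    print(failure_load(RATED_LIMIT_LBS, 0.25))  # 25% buffer -> fails around 12,500 lbs
    print(failure_load(RATED_LIMIT_LBS, 0.01))  # 1% buffer  -> fails around 10,100 lbs
    ```

    With a 25% buffer, a driver who overshoots the rating by a few hundred pounds is still fine; with a 1% buffer, the spec-sheet number is effectively the failure point.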

  • That’s part of the story (though why did Tesla ship a luxury truck with such trash tires?). Weight is actually a good thing in light snow: if you have low traction, you can add weight to the truck bed to get better grip. Folks in my neck of the woods put sandbags and the like in their beds to do exactly that.

    Keep your eyes on what the tires are doing in the video. See how they’re rolling at all the wrong times, and then locking up at all the wrong times? That’s an electronic system failing; it’s probably the Automatic Traction Control, and evidently it doesn’t know what the fuck to do with snow.
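    For a rough sense of what that system is supposed to do, here’s a toy sketch of a traction-control loop. This is not Tesla’s actual implementation; the threshold and torque numbers are invented for illustration.

    ```python
    def slip_ratio(wheel_speed: float, vehicle_speed: float) -> float:
        """Positive when a driven wheel spins faster than the car is moving."""
        if vehicle_speed <= 0:
            return 0.0
        return (wheel_speed - vehicle_speed) / vehicle_speed

    def traction_control_step(wheel_speed: float, vehicle_speed: float,
                              requested_torque: float) -> float:
        """Cut torque while a wheel is spinning up; restore it once grip returns."""
        SLIP_LIMIT = 0.15  # invented threshold; real tuning depends on the surface
        if slip_ratio(wheel_speed, vehicle_speed) > SLIP_LIMIT:
            return requested_torque * 0.5  # back off so the tire can hook up
        return requested_torque            # grip looks fine, keep pulling

    # Wheel at 12 m/s while the car moves at 10 m/s -> 20% slip -> torque gets cut.
    print(traction_control_step(12.0, 10.0, 300.0))  # 150.0
    ```

    What the video shows looks like this logic firing at the wrong moments: torque cut when the wheels should roll, wheels locked when they should turn.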

  • I am fairly dumb. Like, I am both dumb and fair-handed.

    But, I am not pretentious!

    So, let’s talk about your points and the title. You said I had fairly dumb pretenses; let’s talk through those.

    1. The title of the article… there is no obvious reason to think that I think computers think like humans, certainly not from that headline. Why do you think that?
    2. There are absolutely realistic situations exactly like this; it’s not a pretense. Don’t think Looney Tunes. Think of an 18-wheeler with a realistic photo of a highway printed on its side, or a billboard with the same. There’s an academic article, linked in my article, where 3 PhD-holding engineering types discuss the issue at length. This is accepted by peer-reviewed science and has been for years.
    3. Yes, I agree. That’s not a pretense, that’s just… a factually correct observation. You can’t train an AI to avoid optical illusions if its only sensor input is optical (see the sketch below). That’s why Tesla’s choice to skip LiDAR and remove radar is a terminal case of the stupids. They’ve invested in a dead-end sensor suite, as evidenced by their earning the title of Most Lethal Car Brand on the Road.
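    To make point 3 concrete, here’s a toy sketch of why a non-optical sensor breaks the illusion. This is not any vendor’s real code; the function names and numbers are invented for illustration.

    ```python
    def camera_says_road_ahead(pixels_look_like_road: bool) -> bool:
        # Stand-in for a vision model. A photorealistic mural of a highway
        # produces pixels that "look like road", so this returns True.
        return pixels_look_like_road

    def lidar_clear_ahead(nearest_return_m: float, stop_distance_m: float = 30.0) -> bool:
        # LiDAR measures actual range. A wall returns points at close range
        # no matter what is painted on it.
        return nearest_return_m > stop_distance_m

    # Painted-wall scenario: the mural fools the camera, but not the rangefinder.
    mural_ahead = True   # camera sees "open highway"
    wall_range_m = 5.0   # LiDAR sees a surface 5 meters away

    print(camera_says_road_ahead(mural_ahead))                                      # True  -> camera-only car keeps driving
    print(camera_says_road_ahead(mural_ahead) and lidar_clear_ahead(wall_range_m))  # False -> fused stack brakes
    ```

    An optical-only stack has no second opinion to consult; the fused check disagrees with the camera precisely because depth is a different physical measurement.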

    This really does just impact Teslas, because they do not use LiDAR. To my knowledge, theirs is the only popular ADAS on the American market that would be fooled by a test like this.

    Near as I can tell, you’re basically wrong point by point here.