Re: self-driving cars, accidents
@rysiek @cassidymcgurk what's weird is: how come is disengaging not an admission that decisions made by the self-driving component up to that point have led to an unsafe situation? like we'd expect a human driver to drive defensively and not intentionally get into a dangerous situation
(test cases – and even whole methods to generate such test cases – have been proposed in academic literature as well as by some industry players, e.g., to make sure the system can reason about potential objects obscured from its view. so there's at least an expectation for self-driving components to be programmed this way. nevertheless, tesla – and i presume other implementers – chose the easy way out, blaming humans for the shortcomings of their systems)
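(for a flavour of what such a test case checks, here's a minimal sketch in python – hypothetical parameter values, not any vendor's actual logic – of occlusion-aware speed limiting: cap the speed so the car could still stop if something stepped out from behind an obstruction)

```python
import math

# hypothetical parameters -- illustrative values, not from any real system
REACTION_TIME_S = 0.5   # assumed perception + actuation latency
MAX_DECEL_MPS2 = 6.0    # assumed emergency braking deceleration

def max_safe_speed(distance_to_occlusion_m: float) -> float:
    """worst-case reasoning: a pedestrian could emerge from behind the
    occlusion, so the vehicle must be able to stop within that distance.

    stopping distance: v * t_react + v**2 / (2 * a) <= d
    solving the quadratic for v gives:
    v = a * (-t + sqrt(t**2 + 2 * d / a))
    """
    d, a, t = distance_to_occlusion_m, MAX_DECEL_MPS2, REACTION_TIME_S
    return max(a * (-t + math.sqrt(t * t + 2.0 * d / a)), 0.0)

# e.g. a parked truck hides a crosswalk 10 m ahead:
# max_safe_speed(10.0) ~= 8.4 m/s (~30 km/h), whatever the posted limit
```

(the point being: the "defensive" behaviour is computable from what the system already knows about its own blind spots – staying fast and handing off to the human when things go wrong is a choice, not a necessity)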
Re: self-driving cars, accidents
@szakib @trisschen @rysiek @cassidymcgurk given the stringent standards for certifying safety-critical systems (even in the automotive domain, where cost savings otherwise come first), this is not surprising: it's highly unlikely anyone could devise a way to demonstrate the safety of an autopilot-like system built on current system architectures
which, in a sane world, would mean deploying no such systems in production at all