
As mentioned elsewhere in this thread, until these systems can recognize that what they are seeing may not be trustworthy, the way humans do on a day-to-day basis, they cannot be trusted or depended on in a real-world setting.


The larger systems can: just because the vision subsystem says one thing doesn't mean it's trusted outright. Google made a brief comment on this recently, in the context of cars with stickers of realistic scenes on them.
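Roughly the kind of cross-check I mean, as a sketch only: the function name, sensor inputs, and threshold are made up for illustration, not anyone's actual pipeline.

    # Hypothetical illustration of not trusting a single subsystem: only act on
    # the camera's "clear path" claim if an independent range sensor agrees.
    # Disagreement falls back to caution rather than blind trust in either input.
    def path_is_clear(camera_says_clear: bool, lidar_min_range_m: float,
                      safe_range_m: float = 10.0) -> bool:
        return camera_says_clear and lidar_min_range_m > safe_range_m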

But in the context of deliberately hidden or altered pedestrians, I don't see a new risk. The pedestrian would have to be trying to look like something else, or to hide in a very precise way, and they can already do that to human drivers today.


The idea that this is only an issue of disguised pedestrians is a red herring that should not stop people from considering the broader implications of the fragility of vision and other ML systems. When a system does not always function according to its intended purpose, it is sound engineering judgement to consider whether this has implications beyond the specific cases that have been found; there have been tragic outcomes when the people in charge found it expedient not to. In the case of ML, the principle that systems can generalize appropriately beyond their training sets is central, and anything that raises concerns over the generality of that capability needs to be taken seriously. You can certainly hold the opinion that it will not turn out to be a major problem, but the burden of proof lies with those claiming that the systems (after modification, if necessary) are safe enough, and avoiding the question is the opposite of discharging that burden.
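To make the fragility point concrete: the standard demonstration is the fast gradient sign method, where a pixel perturbation too small for a human to notice changes a classifier's output. A minimal sketch, assuming a PyTorch image classifier; the model, image tensor, and label are placeholders, not any deployed system.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Return a copy of `image` perturbed to increase the model's loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel by +/- epsilon in whichever direction raises the loss.
        return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

With a typical pretrained classifier, an epsilon of a few hundredths on [0, 1]-scaled images is often enough to flip the predicted class while the image looks unchanged to a person, which is exactly the gap between "works on the training distribution" and "generalizes safely" at issue here.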



