
What is the alternative if the AI model creators/maintainers don't give the public a mechanism for transparency/accountability to detect abuse of their systems? The cat is out of the bag, and as things currently stand the platforms are going to be widely abused to generate harmful content.


The model creators/maintainers will eventually - and probably already do - include bad actors who will train models to get around whatever safeguards the big players attach to theirs.

This is like asking the New York Times to carefully include fact-checking metadata to ensure there's no fake news on Reddit. Sure, you'll have fact-checking for the NYT, but the NYT isn't the source of the problem.



