>In the last year, critics of large NLP models, which are trained on huge amounts of text from the web, have raised concerns about the ways that the technology inadvertently picks up biases inherent to the people or viewpoints in this training data. Such critiques gained steam after Google controversially pushed out famed AI researcher Timnit Gebru, in part due to a paper she coauthored analyzing these risks. Cohere CEO Aidan Gomez says his company has developed new tools and invested a lot of time into making sure the Cohere models don’t ingest such bad data.

So it's censored. No thanks, not interested.
