
Even if it's legal, I don't think it's a good idea. It's just going to produce even more bullshit than ChatGPT does.


https://www.youtube.com/watch?v=viJt_DXTfwA

Computerphile did an interview with Rob Miles a few days ago about model training, model size, and bullshittery, which he sums up in the last few moments of the video. Numerous problems in training reinforce bad behaviors. For example, it appears that the people rating the responses may have a (Yes | No) voting system, but not a (Yes | No | I actually have no idea on this question) one, which can create some interesting alignment issues.
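A minimal sketch of the point, assuming a hypothetical rating pipeline (the Vote enum and training_labels helper are illustrative, not from the video):

    from enum import Enum

    class Vote(Enum):
        YES = "yes"
        NO = "no"
        NO_IDEA = "no idea"  # the missing third option

    def training_labels(votes):
        # Only votes where the rater actually had an opinion should
        # feed the reward signal; forcing a Yes/No choice on questions
        # the rater can't judge just injects noise.
        return [v for v in votes if v is not Vote.NO_IDEA]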


Agreed if automated, but ChatGPT frequently gives very good answers. If you know the subject matter, you can quite easily filter them, too. I was tempted to do something similar just to start my research.

E.g. if I get a prompt about something, I suspect ChatGPT would give me a good starting point to research on my own and build my own response.

These days that's how I use ChatGPT anyway: like a conversational Google Search.

edit: As an aside, OpenAssistant is crowdsourcing both conversational data and validation. I wonder if we could just validate ChatGPT?


Sample 10-20 answers from an existing LM and use them for reference when coming up with replies. A model would remind you of things you missed. Think of this as testing your data coverage.
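A minimal sketch of that sampling step, assuming a Hugging Face transformers setup (the model and parameters are placeholders for whatever LM you actually use):

    # Rough sketch: draw several candidate answers from an existing
    # LM to use as reference material when writing a reply.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "What are common pitfalls when training language models?"
    # Sample 10 candidates to check coverage of the topic.
    samples = generator(
        prompt,
        max_new_tokens=100,
        num_return_sequences=10,
        do_sample=True,
    )

    for i, sample in enumerate(samples):
        print(f"--- reference {i} ---")
        print(sample["generated_text"])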



