It will be: GPT output is usually garbage, repetitive, flat-out wrong, subtly wrong, or inconsistent in writing style in a way human writing isn't. (It sort of starts trying to say something different in the "same style" it opened with, then switches style partway through.)
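
A toy illustration of the "repetitive" tell, entirely my own sketch and not anyone's actual detector: score the fraction of repeated word trigrams, which tends to run noticeably higher in boilerplate model output than in human comments.

    import re
    from collections import Counter

    def ngram_repetition(text, n=3):
        # Fraction of word n-grams occurring more than once;
        # higher = more repetitive.
        words = re.findall(r"[a-z']+", text.lower())
        if len(words) < n:
            return 0.0
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        counts = Counter(grams)
        repeated = sum(c for c in counts.values() if c > 1)
        return repeated / len(grams)

A single score like this is trivially gamed, of course, which is rather the point of the thread.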

When humans really can't tell the difference, well… https://xkcd.com/810/ seems appropriate.

The GPT output I've been generating with OpenAI's chatbot is indistinguishable from what the bottom 70% of commenters put out.

But an environment glutted with that content is also worthless. I read for the top 10% that has genuinely novel thoughts.


In most cases, judging whether something was written by GPT takes quite a while. It's not like a bad search result you can dismiss at a glance.

Without revealing my source: the latest models being tested in private betas address many of the points you mentioned.


This is why I'm not leaking all my discriminating heuristics. https://en.wikipedia.org/wiki/Goodhart%27s_law strikes again.


At some point you also run up against the virus-vs-immune-system endgame: it's evolutionarily inefficient to optimally outwit an adversarial system.

Usually there exists a local maximum where you obtain most of the value with less-than-full effort.

Sadly, the centralization of search cuts directly against that. I.e., outwitting Google search = $$$$, vs. outwitting one of four equally used search engines = $. :(
