Hacker News

It's everyone's choice - odd, ironic, or appropriate - what to think of how GPT's own answers keep getting quoted on the question of how to detect it.

The thing is, I'd see these answers as similar to everything else the program produces: a bunch of claims from the net cobbled together. I've read a number of sci-fi novels and stories where "inability to understand humor" is the distinguishing quality of an AI (I'm guessing it extrapolated "hard to create" from "hard to understand"). But that doesn't seem to be in play here, where the AI mostly runs together things humans previously wrote (and so it will show an average amount of humor in circumstances calling for it).

A reasonable answer is that the AI's output tends to involve this running-together of common rhetorical devices along with false and/or contradictory claims within them.

-- That said, the machine indeed did fail at humor this time.



I don’t think it was “intentional” so to speak (not that it has intention anyway, so it isn’t clear what distinction I’m trying to make there). But regardless, I’d say it actually succeeded at humor (the contrast of the “clever wordplay” it describes with the lame example is actually pretty funny).

And the idea that the computer would “try” to come up with an example that would trick a computer is itself a little funny, in that it has fallen into giving itself a preposterous task.

But it did definitely fail at clever wordplay.


>And the idea that the computer would “try” to come up with an example that would trick a computer is itself a little funny

There's surely some obscure discussion forum where users talked about that, or some amateur writer who published something along those lines online. ChatGPT is just a statistical device selecting randomly from previous answers.
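The "statistical device selecting randomly" characterization can be sketched as next-token sampling: the model assigns a score to each candidate token and one is drawn at random from the resulting distribution. This is a toy illustration only - a real LLM computes these scores with a neural network conditioned on the whole context, not from a fixed table, and the vocabulary and scores below are made up for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token index from a softmax distribution over logits.

    Toy sketch of the "statistical device" view: convert raw scores
    into probabilities, then sample. Lower temperature concentrates
    probability on the highest-scoring token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical next-word candidates after "the cat", with made-up scores.
vocab = ["sat", "ran", "quantum"]
logits = [2.0, 1.0, -3.0]
idx = sample_next_token(logits)
print(vocab[idx])  # usually "sat", sometimes "ran", rarely "quantum"
```

The randomness is real (two runs can print different words), which is part of why "detecting" such output by its content alone is slippery.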


>A reasonable answer is that the AI's output tends to involve this running-together of common rhetorical devices along with false and/or contradictory claims within them.

The question here is whether this is an AI-only failure mode. Are we detecting AI, or just bullshittery?


I don't know if bullshittery is the only failure mode but I think it's a necessary failure mode of large language models as they are currently constituted.

I would say that human knowledge involves a lot of the immediate structure of language, but also a larger outline structure as well as a relation to physical reality. Training on just a huge language corpus thus yields only a partial understanding of the world. Notably, while the various GPTs have progressed in fluency, I don't think they've become more accurate (somewhere I even saw a claim that they say more false things now, but regardless, you can observe them constantly saying false things).


Gotta be honest, I wouldn't mind throwing out bullshittery with the AI that much.



