
Well, could you define what reasoning actually means? What would an AI need to do to be considered capable of reasoning? What is the core difference between what we do that is considered reasoning versus what AI currently does that is not considered reasoning?

To be clear, I am not making a statement as to whether AI reasons or not. It's just slippery to say something isn't or can't do X when we can't really define X. Perhaps we could pin it down as an outcome rather than as what is, in my opinion, a currently impossible-to-define characteristic of a thing.



In many examples, LLMs betray the fact that they are not reasoning: when given problems that require reasoning to solve, they fail.

Even in this discussion someone gave an example involving made-up board game rules: the LLMs judged every rule set valid because it looked and sounded like a set of board game rules, even when it was not.

In short, you can learn a subject, you can make a mental model of it, you can play with it, and you can rotate it or infer new things about it.

LLMs are more analogous to actors who have learnt a stupendous number of lines and know how those lines work.

They are, by definition, models of language.

If you want a better version, GenAI needs to be able to generate working voxel models of hands and 3D objects just from images.


I don’t believe the board game rules example. I think this would be a piece of cake for an LLM. I’m happy to be proven wrong here if you share an example.
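
For what it's worth, the claim is easy to test directly. Here's a minimal sketch of how I'd check it, assuming the OpenAI Python client; the model name, the deliberately contradictory rule set, and the prompt wording are my own illustrations, not something from the linked thread:

    # Sketch: ask a model whether an intentionally broken rule set is playable.
    # Assumes the openai package (>= 1.0) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Deliberately contradictory rules: rule 2 makes rule 3 unreachable.
    rules = """\
    1. Each player starts with 5 tokens.
    2. On your turn you must discard all of your tokens.
    3. A player wins by collecting 10 tokens over successive turns.
    """

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any recent chat model
        messages=[
            {
                "role": "user",
                "content": "Are these board game rules internally consistent "
                           "and playable? Answer yes or no, then explain.\n"
                           + rules,
            },
        ],
    )
    print(response.choices[0].message.content)

If the model says "yes, perfectly playable" to rules like these, that would support the claim above; if it flags the contradiction, that's the counterexample I'm asking for.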


This is the user I took the example from: https://news.ycombinator.com/item?id=47689648#47696789



