
It would be interesting to think about how it got it wrong. My hunch is that in the "think step by step" section it made an early and incorrect conclusion (maybe even a subtly inferred one), and since LLMs are terrible at walking back mistakes, it then built an internally consistent answer on top of that error.

A lot of CoT to me is just slowing the LLM down and keeping it from making that premature conclusion... but it can backfire when it then accidentally makes a conclusion early on, often in a worse context than it would use without the CoT.



Maybe it needs even smaller steps, and a programmatic (i.e., multi-prompt) habit of always double-checking / validating the assumptions and outcomes along the way; a rough sketch of what that might look like is below.
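
A minimal sketch of that idea, under assumptions I'm making up here: `call_llm` is a stand-in for whatever chat-completion client you use, and the prompt wording, step budget, and retry counts are purely illustrative. The point is just that each small step gets audited by a separate prompt before it's appended to the chain, instead of asking the model to walk back a mistake mid-stream.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: one chat-completion call returning the model's text."""
    raise NotImplementedError("wire this up to your LLM client")


def solve_with_validation(question: str, max_steps: int = 10, max_retries: int = 2) -> str:
    """Ask for one small reasoning step at a time; audit each step before keeping it."""
    steps: list[str] = []
    for _ in range(max_steps):
        step = call_llm(
            f"Question: {question}\n"
            "Steps so far:\n" + "\n".join(steps) + "\n"
            "Give ONLY the next small reasoning step, "
            "or 'DONE: <answer>' if you can answer now."
        )
        if step.startswith("DONE:"):
            return step.removeprefix("DONE:").strip()

        # Separate prompt: check the step's assumptions before committing to it,
        # so an early bad conclusion doesn't get baked into the rest of the chain.
        for _ in range(max_retries):
            verdict = call_llm(
                f"Question: {question}\n"
                f"Proposed step: {step}\n"
                "Is every assumption in this step justified by the question "
                "and the prior steps alone? Answer 'OK' or 'FLAW: <reason>'."
            )
            if verdict.strip().startswith("OK"):
                break
            # Regenerate the flagged step with the critique attached, rather than
            # hoping the model corrects itself later in the same context.
            step = call_llm(
                f"Question: {question}\n"
                f"Your previous step was rejected: {verdict}\n"
                "Propose a corrected next step."
            )
        steps.append(step)
    return "no answer within step budget"
```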




