Hacker News

Strong AI usually refers to artificial general intelligence. It should be capable of solving new kinds of problems much the way humans can. It's safe to conclude this is not it - no philosophy journal paper is needed.


The question of whether LDA as a method "understands" anything could merit one, though.


I think it'd be a short paper. It's hard to imagine that a probabilistic model which merely measures correlations among tokens, without any real locality sensitivity, captures something that could be called understanding without stretching the word past its breaking point.
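The "no locality sensitivity" point can be made concrete: LDA consumes bag-of-words counts, so token order is discarded before the model ever sees a document. A minimal sketch (plain Python, no LDA library needed) showing two documents with opposite meanings that are indistinguishable as LDA input:

```python
from collections import Counter

# LDA's input is a word-count vector per document; order is thrown away.
doc_a = "the dog bit the man".split()
doc_b = "the man bit the dog".split()

# Identical count vectors -> identical LDA input, despite opposite meanings.
print(Counter(doc_a) == Counter(doc_b))  # True
```

Any model fit on such counts assigns both documents the same topic distribution, which is the sense in which it cannot track meaning that depends on word order.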


The question of understanding is a philosophical one that doesn't affect outcomes.

If the output of an AGI demonstrates intelligence in finding a solution, whether it had "understanding" or not doesn't matter. The only thing that matters is its power to turn inputs into outputs.

The Turing test crystallizes this with respect to human language interaction. Assessment of the intelligence of a machine doesn't depend on understanding, consciousness, or any of the other baggage dragged in.



