
Whether this is "Strong AI" or not is a discussion that might make a good paper in a philosophy journal. Computer science alone probably can't tell you if this method can truly "understand" the text.

Science works by separating out the disciplines. Frankly, I think "defense against the terminator scenario" could and should be a scientific field in its own right at this point, on the level of research into solutions for global warming.

This is interpreted as a side effect for now. That is, until you tell the computer to do something based on the topics.



Strong AI usually refers to artificial general intelligence: a system capable of solving genuinely new kinds of problems the way humans can. It's safe to conclude this is not it - no philosophy journal paper is needed.


The question of whether LDA as a method "understands" anything could, though.


I think it'd be a short paper. It's hard to imagine that a probabilistic model that just measures correlations among tokens, without even any real locality sensitivity, captures something that could be called understanding, without stretching the meaning of the word to the breaking point.
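To make that concrete: LDA only ever sees a bag-of-words count matrix, so word order and locality are discarded before the model even starts. A minimal sketch with scikit-learn (the toy corpus and topic count here are illustrative assumptions, not anything from the thread):

```python
# Minimal LDA sketch: the model sees only token counts, not word order.
# Corpus, topic count, and parameters are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "the stock market fell sharply today",
    "investors sold shares as markets dropped",
]

# Bag-of-words counts: all positional information is thrown away here.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a 2-topic model; each topic is a distribution over the vocabulary,
# and each document gets a distribution over the 2 topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)

print(doc_topics.shape)  # one topic distribution per document
```

Note that shuffling the words inside any document leaves `counts`, and therefore the fitted model, unchanged — which is the point the comment above is making about token correlations versus understanding.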


The question of understanding is a philosophical one that doesn't affect outcomes.

If the output of AGI demonstrates intelligence in finding a solution, whether it had "understanding" or not doesn't matter. The only thing that matters is its power to turn inputs into outputs.

The Turing test crystallizes this with respect to human language interaction. Assessment of the intelligence of a machine doesn't depend on understanding, consciousness, or any of the other baggage dragged in.



