
Charlie Steiner pointed this out 5 years ago on Less Wrong:

>If you train GPT-3 on a bunch of medical textbooks and prompt it to tell you a cure for Alzheimer's, it won't tell you a cure, it will tell you what humans have said about curing Alzheimer's ... It would just tell you a plausible story about a situation related to the prompt about curing Alzheimer's, based on its training data. Rather than a logical Oracle, this image-captioning-esque scheme would be an intuitive Oracle, telling you things that make sense based on associations already present within the training set.

>What am I driving at here, by pointing out that curing Alzheimer's is hard? It's that the designs above are missing something, and what they're missing is search. I'm not saying that getting a neural net to directly output your cure for Alzheimer's is impossible. But it seems like it requires there to already be a "cure for Alzheimer's" dimension in your learned model. The more realistic way to find the cure for Alzheimer's, if you don't already know it, is going to involve lots of logical steps one after another, slowly moving through a logical space, narrowing down the possibilities more and more, and eventually finding something that fits the bill. In other words, solving a search problem.

>So if your AI can tell you how to cure Alzheimer's, I think either it's explicitly doing a search for how to cure Alzheimer's (or worlds that match your verbal prompt the best, or whatever), or it has some internal state that implicitly performs a search.

https://www.lesswrong.com/posts/EMZeJ7vpfeF4GrWwm/self-super...
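
To make the quote's distinction concrete, here's a toy sketch (my own illustration, not from the post): "recall" can only return statements already in the corpus, while "search" derives an answer that appears nowhere explicitly, by pruning a hypothesis space step by step.

    # Toy illustration: recall vs. search (hypothetical example).
    # Recall can only surface statements already in the corpus;
    # search narrows a hypothesis space step by step until a
    # candidate survives -- an answer stored nowhere explicitly.

    corpus = {"drug_a": "failed phase 3", "drug_b": "toxic in mice"}

    def recall(query):
        # Returns what the corpus says, nothing more.
        return corpus.get(query, "no human has written this down")

    hypotheses = {"drug_a", "drug_b", "drug_c", "drug_d"}
    constraints = [
        lambda h: corpus.get(h) != "failed phase 3",  # evidence 1
        lambda h: corpus.get(h) != "toxic in mice",   # evidence 2
        lambda h: h != "drug_d",                      # evidence 3 (say, insoluble)
    ]

    def search(candidates, constraints):
        # Each logical step prunes the space; the survivor was
        # never stated in the corpus, only implied by it.
        for c in constraints:
            candidates = {h for h in candidates if c(h)}
        return candidates

    print(recall("cure"))                    # -> "no human has written this down"
    print(search(hypotheses, constraints))   # -> {"drug_c"}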



Generalizing this (taking half a step away from GPT specifics), would it be true to say the following?

"If you train your logic machine on a bunch of medical textbooks and prompt it to tell you a cure for Alzheimer's, it won't tell you a cure, it will tell you what those textbooks have said about curing Alzheimer's."

Because I suspect not. GPT seems mostly limited to regurgitating and remixing what it has read, but other algorithms with better logic might be able to do what amounts to a meta-study: take the results of every Alzheimer's experiment we've run and narrow down the solution space further than humans have managed so far. A human may not have the headspace to hold all the relevant results at once, whereas a computer might.

Asking GPT to "think step by step" helps it, so it clearly has some form of the necessary logic, and it also performs well at "here's some data, transform it for me" tasks. It is limited both in how good its logic is and in the window across which it can do these transformations (though it can remember vastly more data from training than fits in the input token window, so perhaps that's a partial workaround). Since it has both capabilities, extending it does not seem insurmountable: I'm not sure we can rule out that an evolution of GPT could find an Alzheimer's cure within existing data, let alone a system even better suited to the task (still far short of needing AGI).

This requires the data to contain the necessary building blocks for a solution, but the quote seems to dismiss that possibility altogether, even in the case where the data contains all the information needed to identify a cure (just not the worked-out solution itself). See the sketch below.
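
A minimal sketch of what such an extension might look like: an explicit outer loop that uses the model only as a proposal generator and scores candidates against the experimental data, so the narrowing happens in the loop rather than in one forward pass. This is my own illustration, not any real API: propose() and score() are stubs standing in for whatever generator and evidence check you actually have.

    # Hypothetical sketch: model-as-proposer inside an explicit
    # search loop. propose() and score() are stand-ins, not a real
    # API; the point is that the narrowing happens in the loop,
    # not inside a single forward pass of the model.
    import random

    def propose(context, n=8):
        # Stand-in for sampling candidate hypotheses from a model
        # conditioned on everything gathered so far.
        return [f"hypothesis_{random.randrange(1000)}" for _ in range(n)]

    def score(hypothesis, evidence):
        # Stand-in for checking a candidate against experimental
        # results (a meta-study-style consistency check).
        return random.random()

    def search(evidence, steps=5, keep=3):
        pool = propose(evidence)
        for _ in range(steps):
            # Keep the candidates most consistent with the evidence,
            # then let the model refine around the survivors.
            pool.sort(key=lambda h: score(h, evidence), reverse=True)
            survivors = pool[:keep]
            pool = survivors + propose(survivors)
        return pool[0]

    print(search(evidence=["trial results", "assay data"]))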



