Hacker News

This might totally work, and it's kind of impressive if it does. I'm still biased toward extreme skepticism about all of this, since the trustworthiness of demos like this is thoroughly corrupted at this point by cherry-picking and other deceptive tricks.


If you got an invite for GPT-3, give it a shot. I discounted it at first, but after a few tries I was actually a bit creeped out. Even though it is "randomly" making things up as it goes, it shows what seems like intelligence, purely from the sheer amount of data it is trained on.

One thing I was amazed by: GPT-3 could be a great autocompletion engine for any programming language or configuration schema. Things like a GRUB configuration file or an xkb file could be intuitively completed by GPT-3. And even more: GPT-3 can build basic "concepts" and apply them to that domain knowledge. This seems to emerge naturally rather than being something pre-planned by OpenAI. After all, I don't think OpenAI planned for GPT-3 to understand xkb keyboard layouts.


Keep in mind that it's somewhere between "random" and "intelligent". It's more or less very complicated fuzzy pattern matching.

I do like the idea of generating configuration files, at least as a starting point for users in applications with big complicated configuration set ups. As with all things fuzzy, the output probably won't be perfect, but it might help users save time in getting set up.
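As a toy illustration of what statistical "fuzzy pattern matching" completion looks like at the smallest possible scale, here is a bigram model: count which token follows which in a corpus, then greedily extend a prompt with the most frequent continuation. This is only a sketch for intuition; GPT-3's transformer is vastly more complicated, and the function names and the sample "config" corpus below are invented for this example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count which token follows which -- a toy stand-in for the
    statistical pattern matching a language model learns."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, prompt_tokens, n=3):
    """Greedily extend a prompt with the most frequent continuation,
    stopping if the last token was never seen in training."""
    out = list(prompt_tokens)
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return out

# Train on a tiny invented "config" corpus, then complete a prompt.
model = train_bigram("set timeout 5 set default 0 set timeout 5".split())
print(complete(model, ["set"], 2))  # most frequent continuation of "set"
```

Even at this scale the failure mode the parent comments describe is visible: the output is always plausible-looking given the training data, but nothing checks whether it is correct for your system.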


Very complicated multilayer fuzzy pattern matching.

If you look at it that way, it's not dissimilar to the brain.


Not really. The brain can verify the correctness of its pattern matching, and use that to infer other possibly correct patterns. This model also can't really infer intentionality, or discern between variants and weigh their pros and cons. That being said, I think we're not far from AGI; we just need a few more pieces.


It can sort of discern between variants. And intentionality and pros/cons are just another kind of pattern. What it cannot do is any kind of recursive, reflective reasoning (except by unrolling).


If only we knew how the brain worked…


It's the same with gpt-3 though. All demos show only where it works well. Only when you get to try it yourself do you get to explore all the many areas it fails.


The comment you're replying to literally says the opposite thing.


You make up things randomly as you go. You never thought ahead of any thought. Every thought you’ve ever had is essentially a procedurally generated prompt based on your biased models.


> You make up things randomly as you go. You never thought ahead of any thought.

I don't think that's true.

Usually you need to think a lot about something before coming up with "the right" thought(s).

> Every thought you’ve ever had is essentially a procedurally generated prompt based on your biased models.

That may be. But the interesting part is that those models change just by being used.


> I don't think that's true.

How long did you have to think to produce that thought? Or did it just pop into your head instantly?

The point is, you cannot think of an upcoming thought, before you have it in your head. Otherwise you would be seeing into the future.

What you are talking about in your comment is reaching a conclusion based on previous thoughts. Yes, often we link our thoughts together into a narrative or a conclusion after we've had the thoughts, but the thoughts themselves? Those seem to come out of nowhere.


The mind does a lot of unconscious work before coming up with some conscious results.

That's true even for very simple things like motion. You can measure activity in the brain before it becomes a conscious thought. (Those experiments, by the way, caused a lot of fuss about whether we have free will or are completely predetermined in everything we do; but that's another topic.)

Consciousness only observes a small portion of the thought process, so to it, a lot of thoughts seem to come out of nowhere. But the unconscious parts of thinking are very important to the whole process and its outcomes. I think nobody disputes this by now.


I love The Darkness that Comes Before, where this observation is explored and exploited, in case you have not read it.


I have used GPT-3, and it works most of the time. But it fails some of the time too, and that's the problem for use cases like programming or generating config files: if you can't trust the output 100%, you end up reading the output every time.

So the only time saved is that GPT-3 makes you type less.

In any case, I don't type much nowadays. It's mostly copy-paste from Stack Overflow, update parameters, etc.

GPT-3 will be useful, maybe a year from now.


I can't help but think of this scene in Westworld (spoiler S1) whenever GPT 3 (or earlier text prediction models) and this topic come up together: https://www.youtube.com/watch?v=ZnxJRYit44k


That's a pretty bold model for human cognition. It's not something you can just assume.


Sure, but we can think behind our thoughts. And we can sound things out before we say them. And we have mutable long-term memory.

There's not much between us and GPT, but there is some distance still.


The skepticism is warranted for any bleeding-edge technology. I wonder if there's another version of the Turing test: a technology can be considered sufficiently advanced when it's indistinguishable from a fake version you've seen in sci-fi. E.g., the Boston Dynamics dancing robot video (https://www.youtube.com/watch?v=fn3KWM1kuAw) still looks fake to me, because it's at the level I would expect from Hollywood CGI rather than a real tech demo. If I saw the video anywhere else but on the BD page, I would have enjoyed it and forgotten about it, since it looks like an average CGI video.


I genuinely don't understand your position. Are you saying a tech demo is only impressive if it can do things that can't be simulated? What can't be shown via simulation or CGI with enough time and money today? If we're limiting ourselves to video there's no interactive component.

Even though that dancing video likely had hundreds of takes, the part that makes it impressive is that it's real. I swear I'm not trying to be disagreeable here - I honestly don't understand your perspective.


I think what the author is trying to say is that if a technology is sufficiently advanced, it seems like it can't be real, i.e. something only possible with CGI. So we see these dancing robots, think "just more CGI", then are astounded when we find out it's real.


Exactly. CGI is just movie magic. And now some real world tech demos are sufficiently advanced to be indistinguishable from CGI/magic.


Hrm. Is uncanny locomotion to modern robotics what uncanny valley is to CGI?

Fun to ponder.


I had to try a few times to get the prompt right, but that's the limit of the cherrypicking. You're correct that it doesn't work nearly as well on more complex, less temporally stable sites like Reddit.



