
This is reminding me again of The Bitter Lesson.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html



From that article: "actual contents of minds are tremendously, irredeemably complex".

But they're not. The "bitter lesson" of machine learning is that the primitive operations are really simple. You just need a lot of them, and as you add more, it gets better.

Now we have a better idea of how evolution did it.


I am not a fan of this article. The very foundation of computer science was an attempt to emulate a human mind processing data.

Foundational changes are of course harder, but that does not mean we should drop it altogether.


> The very foundation of computer science was an attempt to emulate a human mind processing data.

The very foundation of computer science was an attempt to emulate a human mind mindlessly processing data. Fixed that for you.

And I'm still not sure I agree.

The foundation of computer science was an attempt to process data so that human minds didn't have to endure the drudgery of such mindless tasks.


Take a look at Turing's words in his formulation of the Turing machine and I think it becomes quite clear that the man spent time thinking about what he himself was doing when carrying out computations.

The tape is a piece of paper, the head is the human, who is capable of reading data from the tape and writing to it. The symbols are discernible things on the paper, like numbers. The movement of the tape ("scanning") is the eyes going back and forth. At each symbol, the machine decides which rule to apply.
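To make that correspondence concrete, here is a minimal sketch of the picture in Python. The names and the example rule table are mine, not Turing's: the tape is a dictionary of squares, the head is a position, and the rules are keyed on (state, scanned symbol).

    def run(tape, rules, state="start", pos=0, steps=100):
        tape = dict(enumerate(tape))          # the "paper": one symbol per square
        for _ in range(steps):                # one step at a sitting
            symbol = tape.get(pos, " ")       # the "scanned symbol"
            if (state, symbol) not in rules:  # no applicable rule: halt
                break
            write, move, state = rules[(state, symbol)]
            tape[pos] = write                 # write on the scanned square
            pos += {"L": -1, "R": 1}[move]    # the head (the eyes) moves one square
        return "".join(tape[i] for i in sorted(tape))

    # Hypothetical example rule table: flip every 0/1, halt at the first blank.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
    }
    print(run("0110", rules))  # prints "1001"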

It's an inescapable fact that we are trying to get computers to (1) operate as closely as possible to how we think (since we are the ones who operate them), and (2) produce results which resemble how we think.

Abstractions, inheritance, objects, etc. are no doubt all heavily influenced by thinking about how we think. If we still programmed using 1s and 0s, we wouldn't be where we are.

It seems incredibly short-sighted to me to believe that because a few decades of research hasn't panned out, we should altogether forget about it.


Turing's original paper [1] seems to have no such anthropocentric bias. His description is completely mechanical. Out of curiosity, rather than disputativeness, do you remember where you saw that sort of description? Turing's Computing Machinery and Intelligence paper seems to meticulously exclude that sort of language as well.

You have read me backwards. I am firmly of the opinion that the last few decades of research have entirely and completely panned out. The topic was clearly on the minds of theorists in 1950. But I'm pretty sure early computer architects were more interested in creating better calculators than in creating machines that think.

[1] https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf


There is a ton. What I said isn't much of my own interpretation; it's Turing's own description of the machine.

The first section of the paper literally says "We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions", gives a human analogue for every step/component of the process, repeatedly refers to the machine as "he/him", and repeatedly gives justifications from human experience.

"We have said that the computable numbers are those whose decimals are calculable by finite means. This requires rather more explicit definition. No real attempt will be made to justify the definitions given until we reach § 9. For the present I shall only say that the justification lies in the fact that the human memory is necessarily limited. We may compare a man in the process of computing a real number to machine which is only capable of a finite number of conditions q1: q2. .... qI; which will be called " m-configurations ". The machine is supplied with a "tape " (the analogue of paper) running through it, and divided into sections (called "squares") each capable of bearing a "symbol". At any moment there is just one square, say the r-th, bearing the symbol <2>(r) which is "in the machine". We may call this square the "scanned square ". The symbol on the scanned square may be called the " scanned symbol". The "scanned symbol" is the only one of which the machine is, so to speak, "directly aware". However, by altering its m-configuration the machine can effectively remember some of the symbols which it has "seen" (scanned) previously."

"Computing is normally done by writing certain symbols on paper. "We may suppose this paper is divided into squares like a child's arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares"

"The behaviour of the computer at any moment is determined by the symbols which he is observing, and his " state of mind " at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be '' arbitrarily close " and will be confused."

"We suppose, as in I, that the computation is carried out on a tape; but we avoid introducing the "state of mind" by considering a more physical and definite counterpart of it. It is always possible for the computer to break off from his work, to go away and forget all about it, and later to come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued. This note is the counterpart of the "state of mind". We will suppose that the computer works in such a desultory manner that he never does more than one step at a sitting. The note of instructions must enable him to carry out one step and write the next note."

"The differences from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience. We cannot tell at a glance whether 9999999999999999 and 999999999999999 are the same"

Taking all this together, I don't think it's far-fetched to think Turing was very much thinking about the individual steps he himself took when doing calculations manually on a piece of graph paper, while trying to figure out how to formalize them. Perhaps you disagree, but saying it's "completely mechanical" is surely false, no?


Above and beyond the call of duty. Point thoroughly made. Thank you.



