Hacker News

I've been thinking for some time now about PG's assertion that more "powerful" languages express things with less code.

I just can't say that I'm a believer. There are a lot of reasons (perhaps an essay in there somewhere), but the main one is -- it seems to me a fallacy to think that as languages evolve they work with smaller sets of code. Part of the problem is that languages are so contextual: PG's challenge is a salient example of context. If the context is "typical" web programming, the code set is small. How do we define "typical" web programming? Why, the type of thing PG trimmed up Arc to perform! The circle is complete.

I'm not happy with my disagreement, however. I think there's just something intuitively wrong with PG's assertion. Wish I could express my concerns better. His belief obviously fails at the extreme. One can imagine a program that says "do stuff" which then proceeds to solve world hunger or download pictures of Britney Spears. It would be difficult for an observer, however, to determine what "do stuff" actually involved.

Programming is not a solitary sport.



I think that the actual basis for "less code = more powerful" is psychological. You are less likely to want to write a feature if it requires putting up lots of boilerplate code. People tend to do what they want to do (discipline is highly overrated ;-)). So in a language that requires lots of boilerplate, you end up writing fewer features, even beyond the cost of typing (which is negligible for good typists).
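A toy illustration of the boilerplate point, in Python (names are made up for the example): the same class written Java-style, with an explicit accessor pair for every field, next to the plain-attribute form most Python programmers would actually write.

```python
# Java-style boilerplate: every field gets a getter/setter pair.
class PointVerbose:
    def __init__(self, x, y):
        self._x = x
        self._y = y

    def get_x(self):
        return self._x

    def set_x(self, x):
        self._x = x

    def get_y(self):
        return self._y

    def set_y(self, y):
        self._y = y


# The low-boilerplate version: same behavior, far less to write or scan.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y


p = Point(1, 2)
p.x = 10
print(p.x + p.y)  # 12
```

Nothing about the verbose version is harder to type in any deep sense; the psychological cost is that every new field now demands ceremony, which nudges you toward writing fewer of them.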

Of course, there are other psychological factors at work. You are less likely to work on a project where you have to keep lots of unknowns in your head at once, and adjust them all simultaneously. (I suspect this is why Arc took 6 years to write.) You are not likely at all to work on a project where you can't understand the required language features, or can't figure out the existing code base. And it's no fun working on a project where every time you fix something, something else breaks.

The "ideal language", for me, would be the one that minimizes the sum of all these factors. IMHO, Java goes too far with the boilerplate, eliminating the gains it makes in not having to keep much of anything in your head. Arc makes for inscrutable code bases and requires that you keep the definitions of any macros you're using in your head. Complex JavaScript tends to break when you make small changes. Python hits the sweet spot for me, with liberal helpings of doctest and docstrings.
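For readers unfamiliar with the doctest style mentioned above, here is a minimal, hypothetical example: the docstring doubles as both documentation and an executable test, which is part of why it pairs so well with the "keep less in your head" goal.

```python
def median(values):
    """Return the median of a non-empty list of numbers.

    >>> median([3, 1, 2])
    2
    >>> median([4, 1, 2, 3])
    2.5
    """
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in the docstrings
```

Running the module executes the docstring examples and reports any that no longer produce the shown output.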


Oddly enough, there's something a little bit BS-like (EDIT: 1960s-ish) about PG's definition of "powerful". (I know, Paul, you hate having to explain every little thing you say, but bear with me here.)

The word "powerful" is so generic that it doesn't work here. I guess the simplest way to explain it is to look at what I think Paul is saying: as our languages evolve, it takes less "stuff" to tell the computer what we want it to do.

The problem here is that the amount of "stuff" required is as much a user interface problem as it is a syntactical one. Surely neural interfaces can render most coding obsolete eventually. Perhaps well inside our 100-year timeframe. So what, then, is meant by powerful? Is it our conversation with the computer we are trying to maximize, or our conversations with each other? In other words, am I after the quickest magic from my thoughts to code, or the quickest magic from my team to a solution that we all understand? In the second scenario, the "power" of a programming language is more about how well it can help the team discover, implement, and maintain solutions that have value. Not about the directness of my personal thought-to-code.

I think this second definition of "powerful" holds up better in the real world. My opinion only, though. I don't have a bunch of essays or a cool venture fund, so take it for what it's worth. I hate to be Mr. Definition Guy, but it's tough to have conversations like this without understanding what the heck we are talking about.


"Surely neural interfaces can render most coding obsolete eventually."

I am skeptical of that. I think that a good programming language can be a great help for making ideas precise and exploring their ramifications. We may have some vague notion of what seems a great idea, but when we go to express it in a precise way find out that there are significant obstacles that were not at first evident. So, the read-eval-print-loop of a good "exploratory" language might still be very useful, even in a world with neural-computer interfaces.


Wouldn't a good neural interface create some abstraction of a read-eval-print loop that would seem natural and not part of some other language?

I mean, don't we do the same thing when we have conversations with other humans? Let's say we're going out and the other person wants some things from the store. Surely we are both capable of discussing what's needed from the store without having to formalize it so much, right?

So I take it that perhaps you feel that machines will always need a more formal conversation than humans? I find that a little difficult to believe, what with machine translation, OCR, voice commands and such. (None of which are perfect, but all of which are getting closer to being very useful)

I guess I would be interested in what part of a neural interface would not be able to provide the stimulus a programmer is already receiving from his programming IDE? And if the neural interface can make it the same stimulus, surely there would be room for improvement, no?

This conversation is continued in a new post -- http://news.ycombinator.com/item?id=109286


Typing isn't the major cost of having lots of extra characters. The cost of having lots of extra characters is the separation of concepts from one another in the code. The closer the important parts are to one another, the easier it is to scan the code looking for defects or to make updates.


It isn't the mechanics of typing, it's the entire paradigm. Why would I need to use Arabic characters in some sort of pseudo-English to represent what I wanted the computer to do? Why not spoken words? Or emotional nuances?

Not trying to go way off the deep end, but I think people make a big mistake when they say that since we have a printed code situation now we'll always have one. I'm already seeing a lot of graphical programming environments. Sure -- they suck to some degree. But each year they suck less and less. It doesn't take a rocket scientist to believe that fairly soon a lot of programming could be done graphically. I would think a spoken conversation, with the computer rapidly prototyping, executing, and testing the concepts as we discussed them, could create some very complex solutions. It's not about typing or the mechanics -- it's the paradigm that's outdated.



