On the Design of Programming Languages (1974) [pdf] (ucdavis.edu)
70 points by jruohonen 12 hours ago | 27 comments



He has some good points. This one is from a different paper (Good Ideas, Through the Looking Glass):

Designers had ignored both the issue of efficiency and that a language serves the human reader, not just the automatic parser. If a language poses difficulties to parsers, it surely also poses difficulties for the human reader. Many languages would be clearer and cleaner had their designers been forced to use a simple parsing method.


Who are the Wirths, Dijkstras, Hoares, McCarthys, and Kays of today? I mean, who represents the current generation of such thinkers? Genuinely asking. Most of what I see here and elsewhere is blog posts, videos, and rants by contemporary "dev influencers" and bloggers (some of them very skilled and capable, of course, often more so than I am), but I would like to be in touch with something more thoughtful and challenging.

I can't claim to be equal to the greats, but I do run a Discord server where I think and talk a lot about both the philosophy and practice of language design while building tools that I hope will change the state of the art: https://discord.gg/NfMNyYN6cX

Contemporary PL designers who have inspired my programming language design journey the most are people like Chris Granger (Eve), Jamie Brandon (Eve/Imp/others), Bret Victor (Dynamicland), Chris Lattner (Swift / Mojo), Simon Peyton Jones (GHC/Verse), Rich Hickey (Clojure), and Jonathan Edwards (Subtext). My favorite researcher is Amy J. Ko for her unique perspective on the nature of languages. Check out her language "Wordplay" which is very interesting.

Thanks for the pointers.

Very hot and edgy take: theoretical CS is vastly overrated and useless. As someone who actively studied the field, worked on contemporary CPU architectures, and still does some casual PL research: aside from VERY FEW instances of theoretical CS about graphs/algorithms, there has been little to zero impact on practical developments in the overall field since the 80s. All modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funding into garbage papers. Deeper CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis, and I can clearly see it from a CS perspective.

Well, I think there is something to it. Computers were at some point newly invented, so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real-life mines, at some point they get depleted, and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running, and so does the production of PhDs.

> theoretical CS is vastly overrated and useless

> as someone who actively studied the field,

Does not compute.

Your comment is mere empty verbiage with no information.


I assume that you are talking about modern "theoretical CS". Among the "theoretical CS" papers from the fifties, sixties, and seventies, and even some more recent ones, I have found a lot that remain very valuable. I have also seen many modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of ancient research results that were well known in the past.

I especially hate those who attempt to design new programming languages today but demonstrate a complete lack of awareness of the history of programming languages, introducing into their languages many design errors that had been discussed decades ago and for which good solutions had been found at the time. Those solutions were implemented in languages that never reached the popularity of C and its descendants, so only a few know about them today.


Indeed, we don't really need affine type systems, what use could we get for them in the industry. /s

The key, then, lies not so much in minimising the number of basic features of a language, but rather in keeping the included facilities simple to understand in all their consequences of usage and free from unexpected interactions when they are combined. A form must be found for these facilities which is convenient to remember and intuitively clear to a programmer, and which acts as a natural guidance in the formulation of [their] ideas.

We've successfully found some strong patterns for structuring programs that transform data in various ways for the kinds of programs Wirth was imagining. The best patterns have proven themselves by being replicated across languages (for example discriminated unions and pattern matching) and the worst have died away (things like goto and classical inheritance).
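To make the first of those patterns concrete, here is a minimal sketch of a discriminated union with pattern matching, written with Java's sealed interfaces and records (Java 21+); the Shape/Circle/Rect names are invented for illustration:

```java
// Hypothetical illustration: a discriminated union via a sealed interface.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

class Areas {
    // The switch is exhaustive: the compiler knows every permitted subtype,
    // so a forgotten case is a compile-time error, not a runtime surprise.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r   -> r.w() * r.h();
        };
    }
}
```

The same shape of code appears (with different syntax) in Rust, Swift, TypeScript, and the ML family, which is the replication-across-languages point above.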

There's still work to do to find better languages though. A language is good if it fits the shape of the problem and, while we've found some good patterns for some shapes of problems, there are a lot more problems without good patterns.

I had hoped there'd be more languages for everyday end-user problems by now. At the start of the SaaS era it seemed like a lot of services were specific solutions that might fit into a more general modelling language. That hasn't happened yet but maybe a programming language at just the right level of abstraction could make that possible.


> and the worst have died away (things like goto and classical inheritance)

What's so wrong with classical inheritance, and how has it "died away" when it remains well supported in most popular programming languages of today (Python, C++, Java, C#, TS, Swift)?


Inheritance has its uses, but is easily overused.

In a sense, it’s like global variables. Almost every complex program [1] has a few of them, so languages have to support them, but you shouldn’t have too many of them, and people tend to say “don’t use globals”.

[1] Some languages, such as classical Java, made it technically impossible to create them, but you can effectively create one with

  class Foo {
    public static int bar;
  }
If you’re opposed to that, you’ll end up making that field non-static and introducing a singleton instance of “Foo”, again effectively creating a global.

In some Java circles, programmers will also wrap access to that field in getters and setters, and then use annotations to generate those methods, but that doesn’t make such fields non-global.
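A sketch of that singleton pattern (the Config name and field are invented): the field is non-static on paper, but since there is exactly one instance reachable from everywhere, it is still a global in disguise.

```java
// Hypothetical sketch: a singleton that is effectively a global variable.
class Config {
    private static final Config INSTANCE = new Config();
    static Config getInstance() { return INSTANCE; }

    // Non-static field, but with only one instance it behaves
    // exactly like `public static int bar` with extra steps.
    private int bar;
    int getBar() { return bar; }
    void setBar(int value) { bar = value; }
}
```

`Config.getInstance().setBar(42)` mutates state visible to every other caller of `getInstance()`, just as assigning to `Foo.bar` would.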


> Inheritance has its uses, but is easily overused.

This I can agree with, but that is far from making it the "worst pattern". Everything can be like salt: fine in moderation.


Yes, but inheritance used to be like salt. That’s why it, like “goto” and global variables, got so much attention.

It's also worth noting that statements like

  for (i = 1; i <= 100; i++) {
    S;
    if (P) {
      break;
    }
  }
are just as bad, since `break` (and `continue` and early `return`) are just gotos in disguise.

They are just gotos, but does that mean they are bad (along with their friend try/catch, which is also a goto), or does it mean that gotos can be useful when used with restraint?

Gotos get a bad rap because they become spaghetti when misused. But there are lots of cases where using gotos (or break/continue/early return/catch) makes your code cleaner and simpler.
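One concrete case (a made-up search over a 2-D array): Java's labeled break is about as close to a goto as the language allows, and here it reads more cleanly than threading a `found` flag through both loop conditions.

```java
class Search {
    // Returns the first index pair {i, j} where grid[i][j] == target, or null.
    static int[] find(int[][] grid, int target) {
        int[] result = null;
        outer:                                  // label for the nested loops
        for (int i = 0; i < grid.length; i++) {
            for (int j = 0; j < grid[i].length; j++) {
                if (grid[i][j] == target) {
                    result = new int[] { i, j };
                    break outer;                // a disciplined goto: exits both loops at once
                }
            }
        }
        return result;
    }
}
```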

Part of a programmer's job is to reason about code. By creating black-and-white rules like "avoid gotos", we attempt to outsource the thinking required of us to some religious statement. We shouldn't do that.

Gotos can be useful and can lead to good code. They can also be dangerous and lead to bad code. But no "rule of thumb" or "programming principle" will save you from bad code.


Yes, break, continue, and return are all "just" gotos in disguise. But they restrict the power of the goto enough to not cause the problems that goto causes, while providing a good deal of semantic power to users. Namely, all of these are essentially variations on early return (you can also throw in the logical && and || operators here, albeit they are slightly different in having two exit points rather than one: they're a fusion of if and break, essentially). And it sort of turns out that there are a lot of cases where "return when any of these conditions, tested in a particular order, holds" turns out to be the most natural way to express an algorithm, and these goto-like constructs are the most natural way to write them.

(FWIW, this is essentially the argument that Knuth makes in his defense of goto paper)
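A sketch of that "return when any of these conditions, tested in order, holds" shape, with invented validation rules:

```java
class Validate {
    // Guard clauses: each early return handles one failure case, in order,
    // leaving the happy path unindented at the bottom.
    static String check(String name) {
        if (name == null)        return "missing";
        if (name.isBlank())      return "empty";
        if (name.length() > 20)  return "too long";
        return "ok";
    }
}
```

The structured-programming alternative, a single exit fed by nested if/else, says the same thing with more indentation and less clarity about which rule fires first.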


The argument in the article was that the for loop is (potentially) "lying", and that is still true in my example. Niklaus Wirth's Modula-2 had a LOOP statement in which an EXIT statement could occur anywhere; that statement was at least not misleading. In Wirth's last revision of his last programming language, Oberon, the loop statement is removed, and RETURN is no longer a statement but a clause at the end of a function procedure. This makes Oberon a purely structured language.

https://miasap.se/obnc/oberon-report.html


I think the legend goes that Wirth created Pascal to be the most easily compilable language. To show my age: I recall a class that used Modula-2 when I was in college, also from Wirth and very Pascal-like.

I seem to remember (but I can't find the source) that Wirth initially had three aims in designing Pascal:

1. To use it in teaching a structured programming course to new students. As in the late 60s all student programming was batch mode (submit your program to an operator to run, and pick up the printout the following day), this meant the compiler had to be single-pass and give good error messages.

2. To use it in teaching a data structures course involving new data structures worked out by Wirth and Hoare.

3. To use it in teaching a compilers course. This meant the compiler code had to be clean and understandable. Being single-pass helped in this.


Nowadays you can enjoy it on GCC, as it is now an officially supported frontend, after GNU Modula-2 got merged into it.

https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gm2

Even available on compiler explorer to play with, https://godbolt.org/z/ev9Pbxn9K

Yes, that was a common trend across all programming languages designed by him.

That is also how P-code came to be. He didn't want to create a VM for Pascal; rather, the goal was to make porting easier. By requiring only a basic P-code interpreter, it was very easy to port Pascal, a design approach he kept for Modula-2 (M-code) and Oberon (slim binaries).


> most easily compilable

I think it was more that it would be easy to write a compiler for it, which meant that CS students could write one. I don't have a source for this that I can remember, though.


I saw on page 25 (the third PDF page) a nice argument against variable shadowing. I can think of a couple of modern languages I wish had learned this ;)

That's an argument against references, isn't it? Rather than against shadowing.

A related must-read is Wirth's Turing Award lecture, "From Programming Language Design to Computer Construction" (pdf): http://pascal.hansotten.com/uploads/wirth/TuringAward.pdf

Looks like AI slop to me :)


