Design Principles Behind Smalltalk (1981) (virginia.edu)
194 points by _zhqs on June 13, 2020 | 87 comments


If you're tired of hearing about the virtues of Smalltalk (like I was at one point) and ask "if it is so good, why isn't it popular?", then watch @deech's talk (Aditya Siram) - "What FP can learn from SmallTalk" - https://www.youtube.com/watch?v=baxtyeFVn3w

I think it is the most accessible explanation of the marvel of Smalltalk for those who were not lucky enough to work with it during the late '80s and '90s.

Also I think Ruby is the mainstream language that is closest to Smalltalk today, with the idea that everything is an object and late-binding as much as possible.
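A minimal Ruby sketch of those two traits (the Greeter class is a made-up example): every value answers messages, and method lookup happens at send time, so a definition added after an object exists is still found by that object.

```ruby
# 1. Everything is an object that answers messages, even literals:
puts 1.class                 # Integer
puts nil.respond_to?(:to_s)  # true

# 2. Late binding: lookup happens when the message is sent, not when
# the call site was written or the object was created.
class Greeter; end

g = Greeter.new

class Greeter
  def hello
    "hi"
  end
end

puts g.hello         # "hi" - defined after g existed, found anyway
puts g.send(:hello)  # the same call, written explicitly as a message send
```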

One thing I can't help pointing out is that Smalltalk / Alan Kay's vision of interconnected objects forming a recursive computing system is not the only "true" vision of OO out there. The lineage from Simula through C++ to Java and C# is also object-oriented (or class-oriented, for those who care about the distinction), but statically typed. It is OO too, because OO is a word, not a mathematical definition, and when people talk about OO there is enough similarity between all these variations that we consider them to be in the same camp. Plus, with the growing move towards strict static typing even in interpreted languages, which I think is driven more by actual industry needs and programmer preferences than by external marketing, it is becoming a disservice to think of statically typed OO languages as somehow inferior to the true ideal of object-oriented programming.


Smalltalk prizes interactivity, in the manner that arises from being able to make small, incremental adjustments to code while it (and its state) remain in memory.

Its rejection of static typing is not axiomatic, merely an expedient way of achieving those interactivity goals. Could the two things be reconciled? I don't know, but I think at least that it shouldn't be unthinkable.

As an aside, static type systems which can be mutated during runtime are astonishingly commonplace. They're very crude, and tend to be called "relational databases", and doing those mutations is a major pain because in spite of all that's elegant about relational algebra, actual SQL is a fucking abomination. (And this is coming from a full-on Postgres fanboy.)

On your comment about the similarity of Ruby, I know what you mean, and yet I regard Ruby as being basically nothing at all like Smalltalk. That's because I'm principally thinking in terms of the development culture, tools, and working practices, and not thinking in terms of the implementation of the VM or the object model, or the names of certain methods.

So similarity between languages is a tricky thing to nail down, not least because in the case of a gestalt system like Smalltalk, "programming language" is actually a tricky thing to nail down.


> Could the two things be reconciled?

Yes, they can; that was the whole point behind Strongtalk.

http://strongtalk.org/

Its JIT technology eventually became the basis for HotSpot.


At the risk of putting readers off by the title, there's a video essay on YouTube called "Object-Oriented Programming is Bad"[1]. It's both thoughtful and thought-provoking, although I don't agree with the conclusion. Steve Naroff (from Stepstone/NeXT/Apple) did a recent-ish set of interviews with the Computer History Museum[2][3] and referenced that video a bit (and extemporized about objects in small parts throughout). Side note: Naroff and Brad Cox are set to present a paper on Objective-C at this year's HOPL[4]. I wish it were as readily available as Allen Wirfs-Brock's JS paper for the same conference[5][6].

1. https://www.youtube.com/watch?v=QM1iUe6IofM

2. https://www.youtube.com/watch?v=ljx0Zh7eidE

3. https://www.youtube.com/watch?v=vrRCY6vwvbU

4. https://hopl4.sigplan.org/track/hopl-4-papers#List-of-Accept...

5. http://www.wirfs-brock.com/allen/posts/866

6. https://zenodo.org/record/3707008#.XtoRR-TEklQ


The HOPL paper is available here: https://dl.acm.org/doi/abs/10.1145/3386332 (the PDF link even works without an institutional login!)


Thanks a ton! Earlier this week it didn't look like it was available. https://news.ycombinator.com/item?id=23436517

Looks like there's another Smalltalk paper, too:

https://dl.acm.org/doi/abs/10.1145/3386335


Every time you call a method directly you are breaking encapsulation, because you can no longer change the name of the method without breaking callers; it exposes the internals. That's why message passing gives stronger guarantees of encapsulation.
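A sketch of the claimed distinction, with a hypothetical Account class: callers send the message :balance, and the receiver alone decides what that means, so its internal method names can change without breaking any caller.

```ruby
class Account
  def initialize
    @cents = 1000
  end

  # The "message port": map the public message onto whatever the
  # internal implementation currently happens to be called.
  def method_missing(message, *args)
    return compute_balance_in_dollars if message == :balance
    super
  end

  def respond_to_missing?(message, include_private = false)
    message == :balance || super
  end

  private

  # Internal detail - free to be renamed at any time.
  def compute_balance_in_dollars
    @cents / 100.0
  end
end

Account.new.balance  # => 10.0
```

Whether this really offers "more guarantees" than an ordinary method is debatable; the point is only that the caller depends on a message, not on an implementation name.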

Abstraction is overdone. Sometimes the simplest implementation is a function. Not a class.

Inheritance is a failed idea of code reuse because it leaves the code brittle.

You can develop GUI apps without OOP - Tcl/Tk and some game-programming GUIs are done that way. All you need, essentially, are callbacks.

Polymorphism is essentially overloaded functions and you don't need OOP to support that.


Polymorphism is OOP, not every approach to OOP is the same kind of nail.

Classes, as a concept, are nothing more than extensible modules, and in that regard C and Assembly are pretty much the only mainstream languages that eschew such abstractions.

You can also develop GUIs in Assembly, and I don't miss doing that.

Interesting that you mention Tcl/Tk, given the OOP extensions that were all the rage when version 8.0 came out.

https://wiki.tcl-lang.org/page/Object+orientation


> Classes as concept, are nothing more than extensible modules

If you're talking about ML-style modules, that couldn't be further from the truth, both in theory and in how they're used in practice.

Modules can be used (among many other things) to implement abstract data types, which are conceptually a bit easier to compare to objects[0]. Then you'd have to go into structural vs nominal typing, functors vs generics, etc.

[0]https://www.cs.utexas.edu/users/wcook/papers/OOPvsADT/CookOO...


No, I am speaking about modular programming as abstract CS concept, with those nice math diagrams to describe language semantics.


You missed the first point I made. C++ / Java break encapsulation by using methods and pretend they are encapsulated.

Julia implements polymorphism without OOP. Modules are better than classes anyway. The whole Java ecosystem tried so hard to bring modules back as components and failed miserably.

If we are going to talk about all types of interfaces, then OpenGL is a clear winner. The last time I checked, it uses a procedural interface and vertex buffers directly. Why a procedural interface? Because OOP is incapable of dealing with any complex data structure other than lists in a performant way.


Modules are classes without inheritance.
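If one squints, a Ruby module fits this description: a namespace bundling related functions and constants, with no instances and no superclass chain to reason about (Geometry is a made-up example):

```ruby
module Geometry
  ORIGIN = [0.0, 0.0]

  module_function  # functions callable directly on the module

  def circle_area(radius)
    Math::PI * radius * radius
  end
end

Geometry.circle_area(2.0)  # => 12.566370614359172 (4 * pi)
```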

C++ and Java don't break anything that a badly designed module wouldn't break as well.

OpenGL a clear winner?!? The 3D API that should never have gotten outside SGI, with its global variables and state?


That's exactly why modules are better than classes: they don't have inheritance. It allows for simpler solutions. No debates about taxonomies.

OpenGL is faster. Procedural code is always faster. Global variables are infinitely better than pretending problems vanish with singletons and AbstractFactoryMakerProducer. I remember going from C++ to Java; what a joke. C++ is infinitely better than Java because it doesn't suffer from OOP monothinking, which is beyond pathetic. It was the first time I realised programmers would repeat marketing slogans without verification.


Modules can still be generic without inheritance, it is a matter of language and implementation.

OpenGL is a garbage API; I regret having spent so many years dealing with its Frankenstein API design.

Fast? Only if you happen to use the magic incantation of data structures and the driver happens to play ball.

There is a reason why all modern 3D APIs are object based, including Vulkan with its handle model.

OOP-based C programming is used all over the place in the Linux kernel, in spite of Linus' opinions regarding C++.


Modern OpenGL is also object-based. You select which instance you want to interact with via glBindX.


AZDO is not compulsory and is rarely seen live; for example, it isn't supported in the ES or WebGL variants.


I found OpenGL easy enough compared to DirectX, which is an abomination. Another problem with OOP: it breaks the ABI because of name mangling. Had people just stuck to modular programming without inheritance, this would have been a non-issue; namespaces would have become underscores or something.

You can almost trace back all the horrible decisions in C / C++ and compare them to Wirth languages, which tried modular programming, bounds checking, and so on.

> Vulkan with its handle model

That sucks. The performance is going to be slow. It's possible they did not go full OOP because of performance.

> OOP based C programming is used all over the place in the Linux kernel

Nope. They implement interfaces / modules using structs. Again, no inheritance and no "classes". The most popular examples being drivers and filesystems.


OOP doesn't require inheritance, and structs with function pointers are how "classes" get done in C. Better brush up your knowledge of OOP, and while you are at it, learn about Vulkan as well.

Also in case you have missed, all proprietary 3D APIs are either OOP or object based.

I wonder where the OpenGL C implementation is that will outperform LibGNM.

By the way, one of the reasons why OpenCL failed to gain market share was its focus on C, instead of the nice C++ libraries provided by CUDA.

When Khronos woke up and decided to offer such APIs, it was too late; now we have OpenCL 1.2 renamed as OpenCL 3.0, and SYCL is being re-designed to be backend agnostic.


From a quick read, Vulkan doesn't use inheritance. It's using the C module approach. Even the Web DOM doesn't use inheritance. It seems that in the real world no one uses imaginary objects and everyone uses structs.


Recommended reading: "Component Based Programming".

Again, OOP is not 100% about inheritance.

It seems some people keep not learning what OOP is all about and relate OOP <===> Java.


I think you are finally getting it. Modules ~ Components. Where you and I differ is that you think Component = Object. I think the difference is sufficient to warrant a complete divorce from the word "object".

Linux kernel uses components, not objects.

The ideology of OO is using classes and inheritance to design software. It is an abysmal failure of software engineering, only mediocre managers use OO anymore and pretend visitor pattern is still a thing. All successful OOP projects use components in a disguised form and use a separation between Data (Value Objects, POJOs) and Code (Services). All failed OO projects use classes and take 3 years to deliver even the dumbest of software.


No, you are the one still not getting it: components are an area of OOP, usually referred to as object-based programming.

SELF, BETA and JavaScript don't use classes yet are OOP.

VB originally did not do inheritance and yet it did provide another OOP approach.

Plenty of other languages can be looked upon on SIGPLAN.

Speaking of failed OOP projects, the Web DOM, Android, iOS, CUDA, DirectX, LibGNM, LLVM and Metal are indeed such colossal failures; what were they thinking.

In case you are lacking CS literature I can give you some hints.


The fact that you have to go through so many references to define the word "object" exactly proves my point.

You can just read this wiki ...

https://en.wikipedia.org/wiki/Modular_programming

Modularity is the very fabric of most engineering disciplines. As opposed to that, OO is a holistic "ideology" about using classes and inheritance to design software. I call it an ideology because despite its abysmal failures people still refer to it in positive terms.

Apart from Android, not one piece of software on the list uses inheritance and classes. OO has shown only narrow success, in GUIs.

In C you don't have modules and interfaces, so what you do is define a struct of function pointers. Now you can compile any .c file which follows this interface and dynamically load it into the software.

Python / Perl / JavaScript / Ruby ... support this. Remember how much code is reused in the packages downloaded when you do pip install? Thank modules for that, not OO. Because of duck typing these dynamic languages don't need interfaces, but you can find code snippets that implement them with keywords like pass.
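A small Ruby illustration of that duck-typing point (first_line is a made-up helper): anything answering #read qualifies, and no interface is ever declared.

```ruby
require "stringio"

def first_line(source)
  source.read.lines.first  # only assumes the object answers #read
end

first_line(StringIO.new("hello\nworld"))  # => "hello\n"
# A File, a Socket, or any hand-rolled object with a #read method
# would be accepted by the very same code.
```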

In modular coding you get libraries, extensions and plugins. I find it sad not many people use these terms and prefer to honk OO every time.


Ruby is a joy to use and relatively close to Smalltalk in many many ways.

I wish Ruby had taken Python's spot. When Rails exploded ~2006, I thought it'd become a mainstream language. But sadly it has never been very popular for non-web things, if we exclude Japan.

One reason is probably that the standard Ruby implementation (MRI) was really slow until recently. Smalltalk, on the other hand, had extremely good VMs that served as the embryo for the JVM.

I've used Ruby for many big projects (but never used Rails!) and it has scaled really well for large and exotic things such as abstract interpreters.


Speaking of Ruby, there is this book that I read a few years ago (five years ago, in fact – time flies), that was using Ruby when explaining the things it was talking about.

Personally I don't use Ruby and prefer Python instead, but the book was very enjoyable nonetheless, so I would recommend it to anyone, even to people like me who don't really like Ruby and don't really want to read or write it in general.

Stuart, T., 2013. Understanding Computation: From Simple Machines To Impossible Programs. O’Reilly.

Book website: https://computationbook.com/


Another good "non-rails" Ruby book is "Enterprise Integration with Ruby"[0]. Unfortunately, it's a bit dated (2006), so I am sure some of what Schmidt details is no longer supported/valid. Still, a cool book IMHO.

0 - https://www.amazon.com/Enterprise-Integration-Ruby-Maik-Schm...


This books looks super interesting. Did you find any of it applicable to particular problems you are working on or was it good general knowledge?


As good general knowledge, yes.


I didn't give Ruby a second thought after hearing Matz say something along the lines of being a bit sad that people were using it, because it meant he couldn't change it anymore. It was his toy language that escaped his control, and now he had to worry about compatibility and couldn't just break things. It's not a good sign when the creator didn't want it to be used that much.


I'd be much happier if someone could instead just provide a version of Smalltalk that everyone can agree on.

"Use Squeak!"

"No, Squeak is old! Use Pharo!"

"Pharo doesn't adhere to the Smalltalk standard. Just use gnu-smalltalk in a terminal!"

Never have I seen such a self-defeating community. If the Smalltalk community wants it to be anything more than a historical curiosity, I beg it to form some kind of standards committee and agree on a common implementation so that I can learn Smalltalk without worrying that I'm studying a dialect that is not mutually intelligible to the other implementations.


Gilad Bracha gave an interesting talk (or in his words, "[maybe] the best talk I ever gave") called "Utopia and Dystopia: Smalltalk And The Wider World", where he posits a universe where the Smalltalk community has recognized and proactively fixed this problem, along with some other problems that outsiders see when they look at Smalltalk.

https://www.youtube.com/watch?v=BDwlEJGP3Mk

His blog is also great to follow as a general programming blog, and as it turns out, he wrote a post just two weeks ago where he rehashes some of the content.

https://gbracha.blogspot.com/2020/05/bits-of-history-words-o...


I loved the talk, thanks for sharing it.


The Smalltalk community is an artifact of the pre-Internet days. See Forth or Lisp for the same result. Communication was slow, in the form of publications and you couldn't get your hands on actual code (which probably wouldn't run on any of your machines anyway).

See the many Unix clones before Linux. The difference is that Linux came along when you had GNU code to avoid having to implement ls, ps, etc yourself, you had many people with 386 PCs and you had the Internet to work together without having to be all in the same room (not to mention not having to set up a company to sell CD-ROMs, though other people did that for Linux).

By the time languages like Python and Ruby came along it was a different world and they could be more Linux-like in terms of community and less QNX/Coherent/Idris/BSD/Irix/AIX/Solaris/...

Putting a splintered community back together is not an easy task.

Note that Squeak, Pharo and Cuis deviate from the ANSI Smalltalk standard in the same ways (Traits, {....} syntax for runtime collections).


The Forth situation is a little different: it is a language designed to be easy to implement, so everyone is redoing Forth all the time, including its own creator (cf. ColorForth), who said that "Forth is what Forth programmers do" [1].

At the same time, Forth has had an unofficial standard since as early as 1979 (Forth-79, followed by Forth-83), before the official ANSI standard (1994).

Actually, a splintered community is not an entirely bad sign. It means that there is no monopoly and that the sub-communities are healthy enough that they don't need to make compromises with their siblings in order to survive.

[1] http://www.ultratechnology.com/moore4th.htm


Ah, you mean like a version of Python that everyone can agree on!

Joking aside, it's an entirely understandable sentiment, but Smalltalk is not only an old language, but also one that lives in a symbiotic relationship with its tools (which defy the normal notions of language standardisation).

As things stand, if you want a standard, what you're likely to get is a time-capsule. Pharo in particular does not want to be a time-capsule, and while it's certainly been a source of frustration to me that Pharo has existed in such a state of flux for so much of its life, I understand that the change is necessary, and I remain cautiously hopeful about the direction they're taking.

I'm not a daily user of Pharo, so while I have a rough idea that it is becoming more settled as time passes, I'm not an authority here.


I had a lot of the same issues, and I finally started to enjoy using Smalltalk once I gave up on the idea that I was going to use it for anything practical. After that I found Squeak to be just fine to play around with.

From my perspective as someone learning about it post-2015, the really killer "feature" is the ability to modify anything anywhere while the image is running. But this is also the feature that makes it so impractical. You start to realize as you are tinkering how simple and elegant it is, and how much modern tooling really gets in the way. But this is all based on it being built up upon itself, and all the magic sits in an environment so foreign to other tools that it always feels awkward to interface with them.



I really like all of Sandi Metz's talks and books on Ruby for that reason. Her Smalltalk background really shines through to bring out the best parts of OO, and they're really applicable for just getting better at design and architecture.


OO was the best programming paradigm until people recognized the power of concurrency. Even in the use case of GUI programming, where OO is supposed to shine brightest, OO is a failure, because interactive programs are extremely concurrent. Even back when everything was executed serially, GUI frameworks were designed around concurrency in crummy languages never meant to support concurrent programming, such as C. Or worse, C++.

And functional programming is no good as a general programming paradigm because:

1. Little of computing--almost nothing--is declarative. That means functional programming's non-general by definition.

2. Programming cannot be married to mathematics despite what formal proof proponents and people who think naming their language "Pascal" is a good idea think. It will always be an arranged and sham marriage.

3. Thinking being able to easily execute your declarative programs in parallel means you'll never have to think concurrently and can just outsource everything to Hadoop only means you can't understand the difference between parallelism and concurrency.


> OO was the best programming paradigm

No, it depends on what you're doing. Always does, always will.

> Even in ... GUI programming, ... OO is a failure because interactive programs are extremely concurrent

Seems to work well for me on Windows. What's the problem, precisely?

> And functional programming is no good as a general programming paradigm because ... Little of computing--almost nothing--is declarative

That's just wrong.

> Programming cannot be married to mathematics...

You give no evidence despite it being a long association.

> Thinking being able to easily execute your declarative programs in parallel means you'll never have to think concurrently

Yes, because concurrency only becomes an obvious issue when state is involved. So FP sidesteps that by not having interacting state. I have some criticisms of FP but this is plain silly.

You're just dissing stuff here. It's annoying.


> Little of computing--almost nothing--is declarative... That's just wrong.

* Artificial general intelligence
* Operating systems
* Video games
* Emulators for hardware
* Interpreters
* Debuggers
* Word editors
* Image editors
* Graphical user interfaces
* Robotics
* Web browsers
* Servers

Basically everything that makes computing interesting is non-declarative and concurrent. Financial math on K street, parser combinators, and SQL queries are the declarative segment of programming.


OK, good comeback. It's forcing me to be precise.

Declarative/imperative are models, not necessarily implementations (although they often are treated as model x = implementation x). You need to separate the two conceptually, but in implementation you can do both. Anything you can do declaratively you can do imperatively and the reverse.

As such it makes it meaningless to say that almost nothing is declarative because how you choose to create it is a choice. How well it runs is another matter because real hardware is relentlessly state-y, but that's an implementation issue. And that's fine, and sometimes imperative stuff is just cleaner.

But you can write a web browser or a GUI or whatever declaratively, but underneath it will be imperative because that's hardware.

Declarative vs imperative is a choice, not necessarily a 'natural' thing (depending on the problem; I'd say a web page renderer is distinctly stateless: take some static HTML, render it - where's the state?). A web server can be stateless too: a request comes in, a web page goes out. I need someone else to speak for other domains, but I'm pretty sure stateless GUIs have been written.

Any big data cruncher that works on petabytes, purely with output coming from input deterministically is effectively stateless.


Nothing is declarative or imperative outside of a certain implementation of a solution. All of the things named trivially have declarative solutions because of Turing completeness. Whether or not a paradigm lends itself easily to something is a bit of a different question, but I think a lot of people tend to say “oh, that’s obviously stateful and imperative” merely because they haven’t thought about the problem in a different way.


Turing machines only model mathematical functions. They cannot model anything on my list. That's why computer scientists invented I/O automata in an attempt to build a theoretical model that handles some of those items on my list.

> “oh, that’s obviously stateful and imperative” merely because they haven’t thought about the problem in a different way.

No I am not a conventionalist.


> Turing machines only model mathematical functions. They cannot model anything on my list.

Since turing machines are turing complete, and existing general purpose CPUs are turing complete, this is trivially incorrect.

> That's why computer scientists invented I/O automata ... those items on my list

wat?


How can a Turing machine simulate concurrency?


It doesn't need to because it's implementation details.

Where it's needed such as in modelling the very important implementation detail of parallelism on multi-core machines to ensure correctness (such as with model checkers like Spin) then stuff like buchi automata are used, but that's dragging maths into the real world to solve real world problems. Turing machines don't live in the real world.

Please just stop.


We know a Turing machine can't model all aspects of concurrency because a Turing machine with threads can compute an integer of unbounded size while there's always a bound on the greatest integer an unthreaded Turing machine can.

But this is all navel-gazing anyway. What's even the point of a purely declarative OS when it's either gonna:

1. Act as a glorified C preprocessor for an imperative language like Haskell and its do-notation?

2. Look like brainfuck?


Haskell with do-notation is not imperative, that’s a common misconception. It’s a special syntax for working within a monad that desugars to nested function calls.
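For a rough flavour of that desugaring, here is a loose Ruby analogy (not Haskell semantics) using arrays as the list monad, with flat_map standing in for >>=:

```ruby
# Haskell:   do { x <- [1, 2]; y <- [10, 20]; return (x + y) }
# desugars:  [1,2] >>= \x -> ([10,20] >>= \y -> return (x + y))
# The "sequential" do-block is really nested function calls:
result = [1, 2].flat_map do |x|
  [10, 20].flat_map do |y|
    [x + y]  # 'return' wraps the value back into the list monad
  end
end

result  # => [11, 21, 12, 22]
```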

Typically declarative systems are interesting because it is easier to prove strong properties about them.


I agree. Declarative systems are interesting. I am not against studying, implementing, or understanding declarative OSes from an intellectual viewpoint. Rather, people generally speaking ignore that the declarative perspective is not teleological and doesn't reflect the teleology of an underlying OS.

Imperative programs are so popular among "joe schmoe" average programmers because they have a clear teleology. People are willing to forgive that imperative programs are harder to formally reason about because they make the teleology clearer. But have you seen declarative code? Because it's like math formulas, each function needs the same kind of documentation in order to understand the teleology behind the function.


This is like...not even wrong.


Interesting given that many of the original problems that were tackled with OOP were precisely concurrency and distributed computing.

Even the Xerox manuals contain several examples.


OO has had most success in GUI programming. As for concurrency, Erlang seems to do ok and its creator called it "possibly the only object-oriented language"


> Even in the usecase of GUI programming, where OO is supposed to shine the brightest, OO is a failure because interactive programs are extremely concurrent. Even back then when everything was executed in serial GUI frameworks were designed concurrently in crummy languages never designed to support concurrent programming such as C. Or worse, C++.

I must say that I have trouble reconciling this with the fact that every single GUI program running on my computer right now is written mostly in C++ with a bit in C for my WM and a bit of Rust for Firefox.


Message passing and poking around in objects is awesome.

But more from a power-user / developer point of view. Power users... power users aren't here anymore.


Principle 1 "If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual."

Globally, we go against this principle in every respect, moving into ever narrower specialisations. Such a movement is made on purpose by the flow of money from central printers/powers, and the result follows the principle: a lack of creative spirit in single individuals. The tools we are taught and the way we are trained work against creativity.

Looking back over the last 10 years, it is visible how the industry has been misled in wrong directions by the biggest players. So don't ask why we haven't been using Smalltalk-like technology for 40 years.


To expand on your comment, the reason why we don't use simple and understandable tools like Smalltalk is that the industry is not interested in this kind of simplicity. For them it is much better to pay thousands of people to work on different, unrelated things and then put these things together at all levels: hardware, OS, software, etc. The resulting system is impossible for a single person to understand, and works only because of the continuous struggle of thousands of people to make it work. But this is positive for the industry behind this architecture: every year they can create new components to fix flaws, introduce new ones, and keep the wheel of the technology industry turning.


VPRI (Alan Kay's research institute) had a project called STEPS that's worth looking into. "From the desktop to the metal in 20,000 lines of code." They pretty much did it.


To me Smalltalk has the same kind of beauty to it as C, it's pure and simple. They both allow me to keep the entire language in my head, which means I can spend all my energy on solving real problems.

And the interactivity is awesome once you wrap your head around it. If you haven't experienced it, you most likely won't see the point since you assume much of what it enables is impossible.

I'm still not sold on the image-based dev cycle, Common Lisp allows the same level of interactivity and even more power without giving up files.

They killed it by trying to keep selling it for profit at a time when free alternatives started appearing.


The best Smalltalk is Objective-Smalltalk.

It's a beautiful way to go up and down the ladder of inference.

Basically, Objective-Smalltalk is based on Objective-C, which is a superset of C. It's like having the two worlds come together at the same time.

Does it miss some of the beauty of a totally integrated system like Squeak? Sure. But that's kind of the beauty too. Can we make an Objective-Smalltalk takeover?

http://objective.st


I was recently re-watching the old demo of 280 North's Atlas, an Interface Builder-inspired tool. And when I think about it, it also seems a little strange that object lovers and all today's disaffected Mac fans could pick up the pieces of GNUStep and start building out a usable replacement for themselves as a reaction against the deterioration and inevitable demise of the Mac, but that isn't materializing. There's something, I think, in the Mac developer psyche that says they're responsible for building their own discrete little apps and doing a lot of polishing on them, but they're completely uninvolved in shaping or otherwise working on the environment/platform itself. This is the difference between Mac folks and Smalltalkers.

https://vimeo.com/11486446


> GNUStep ... but that isn't materializing

While I understand the overall sentiment, it's a bit odd as a comment on Objective-Smalltalk which runs on GNUstep. For example, the Objective-Smalltalk website (http://objective.st) is served by an Objective-Smalltalk web-server (heavy lifting courtesy of libmicrohttpd) running on GNUstep on a Linux-based Digital Ocean VPS.

I am also currently involved in a project improving lldb Objective-C debugging with the GNU runtime and just contributed to the GNUstep Catalina compatibility effort.

https://www.gofundme.com/f/gnustep-catalina-compatibility

And of course I also instigated writing a major Web CMS in Objective-C on SunOS/Solaris, AIX, etc. First with GNUstep, later libFoundation, though I think they switched back to GNUstep. Always contributing back.


Necessity is the mother of invention! If the demise of the Mac / Cocoa APIs is nigh, expect there to be some work on it.

This guy is doing some interesting work: https://mulle-objc.github.io

One more factor: my guess is that GNUstep picked the LGPL, which, although more permissive than the GPL, turns away the money / status seeking Mac folks.


GNUstep being LGPL has as much adverse impact on developers as Apple's implementation being proprietary. (I.e., none.) It's the _LGPL_ after all, not the GPL.

GNUstep has also moved to GitHub and seems to favor the MIT license nowadays where possible (e.g., libobjc2)[1][2].

1. https://github.com/gnustep/libobjc2

2. http://etoileos.com/dev/licensing/


Smalltalk can be described concisely as a graphical UNIX: a small set of computational mechanisms, designed to be general and to interoperate for the construction of larger systems. Unfortunately, the GUI industry was inspired by Smalltalk but never decided to use Smalltalk itself as its foundation, since C/UNIX already existed and provided higher performance on the hardware available at the time.


> Smalltalk can be described concisely as a graphical UNIX

Or Unix as a closet Smalltalk...

"Liberating the Smalltalk lurking in C and Unix" by Stephen Kell

https://www.youtube.com/watch?v=LwicN2u6Dro


What I miss these days is interaction with applications. Yes, REST APIs are fun and all. And both Apple and Microsoft have their component models out there, but other developers don't take the time.

That, combined with the super-bad tools for patching things together, makes it almost useless these days.

It’s unfortunate, because there’s so much you can do with good interoperability.


What if all Smalltalk objects had a URL? They could provide simple (and complex) network services to any computer on the Internet. There is a profound lack of ambition around software and operating systems.


Work on this has been done. See Polymorphic Identifiers: Uniform Resource Access in Objective-Smalltalk [1].

1. https://www.hpi.uni-potsdam.de/hirschfeld/publications/media...


In fact, the Objective-Smalltalk site (http://objective.st) is served by an Objective-Smalltalk web server, with the incoming URLs converted to PIs and then dispatched via nested Storage Combinators (https://dl.acm.org/doi/10.1145/3359591.3359729).

:-)
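The gist of the idea — naming resources with URI-like identifiers and dispatching access by scheme — can be illustrated with a toy Python sketch. This is not the Objective-Smalltalk API; the `Resolver`, `DictStore`, `mount`, `get`, and `put` names are all hypothetical, chosen just to show per-scheme uniform access.

```python
# Toy sketch (not the actual Objective-Smalltalk API): resources are named
# by URI-like strings, and a per-scheme handler table makes access uniform.

class DictStore:
    """An in-memory store; other schemes could wrap files or HTTP."""
    def __init__(self):
        self._data = {}

    def get(self, path):
        return self._data[path]

    def put(self, path, value):
        self._data[path] = value

class Resolver:
    def __init__(self):
        self._schemes = {}

    def mount(self, scheme, store):
        self._schemes[scheme] = store

    def _split(self, ref):
        scheme, _, path = ref.partition(":")
        return self._schemes[scheme], path

    def get(self, ref):
        store, path = self._split(ref)
        return store.get(path)

    def put(self, ref, value):
        store, path = self._split(ref)
        store.put(path, value)

r = Resolver()
r.mount("var", DictStore())
r.put("var:greeting", "hello")
print(r.get("var:greeting"))  # → hello
```

The point is that the caller's code looks identical whether `var:` is backed by a dictionary, a file system, or a remote server; only the mounted store changes.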


> No component in a complex system should depend on the internal details of any other component.

Unfortunately, the internal details of other components are important for performance. It is not enough to know that something is e.g. an ordered collection; I also need to know whether accessing items randomly has an O(1) or O(n) cost. Increasing the complexity from linear to quadratic is not a minor detail that a faster processor would solve.

But when you start caring about performance, suddenly you don't deal with ordered collections, but with arrays, linked lists, red-black trees, and whatever. Then you also get unspecified ordered collections returned from other modules, and you wonder whether it is okay to use them as they are, or whether you should pay a one-time price to convert them to a more efficient collection. And somewhere, a new programmer is crying: "I just wanted to create a list containing dozen numbers, why do I have to deal with all this complexity?"
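The cost difference hiding behind a single "ordered collection" interface can be made concrete with a small Python sketch (illustrative only; the class names are made up). Both classes answer the same `at:`-style message, but one pays O(1) per random access and the other O(n), counted here as node visits:

```python
# Illustrative sketch: both classes satisfy the same "ordered collection"
# interface, but random access costs O(1) vs O(n) node visits.

class ArrayLike:
    def __init__(self, items):
        self._items = list(items)
        self.visits = 0

    def at(self, i):
        self.visits += 1          # one step, regardless of i
        return self._items[i]

class LinkedList:
    def __init__(self, items):
        self.head = None          # nodes are (value, next) pairs
        self.visits = 0
        for x in reversed(items):
            self.head = (x, self.head)

    def at(self, i):
        node = self.head
        for _ in range(i):        # must walk i links first
            self.visits += 1
            node = node[1]
        self.visits += 1
        return node[0]

a = ArrayLike(range(1000))
l = LinkedList(range(1000))
assert a.at(999) == l.at(999) == 999   # same answer...
print(a.visits, l.visits)              # → 1 1000  ...very different cost
```

Summing that linked-list cost over a loop is exactly how an innocent-looking traversal turns quadratic.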


I always thought the ideas behind the Lively Kernel were interesting: a web browser/JS engine as the runtime, WebDAV as the persistence layer.

https://lively-kernel.org/

Looks like they are on the third iteration now:

https://github.com/LivelyKernel/lively4-core

https://lively-kernel.org/lively4/lively4-core/start.html

Apparently now using git for persistence.


Boy, I love this stuff.

Two subtle but necessary and principal clarifications.

1. A reference is not "from that time on". The notion of time must be irrelevant here. The reference is immutable until the process (scope) ends, not in time, just ends.

2. Objects are unnecessary. References and the abstraction principle are necessary and sufficient. Duck-typing is abstraction without an object.

This is what Haskell type-classes captured - to be a _____ is to be able to ______.

Greeks and Platonism plagued OO minds.
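The "to be a ___ is to be able to ___" idea can be sketched with Python's structural `typing.Protocol` — similar in spirit to a type-class constraint, though checked structurally rather than via declared instances. The class names here are invented for illustration:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):
    """To be a Quacks is to be able to quack."""
    def quack(self) -> str: ...

class Duck:
    def quack(self):
        return "quack"

class Robot:
    def quack(self):          # no inheritance needed: the ability suffices
        return "beep-quack"

class Rock:
    pass

assert isinstance(Duck(), Quacks)
assert isinstance(Robot(), Quacks)
assert not isinstance(Rock(), Quacks)
```

Note that `runtime_checkable` only checks for the method's presence, not its signature, so this is a weaker guarantee than a real Haskell type class.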


Very interesting read, though it hurts me to see optimism in this paper mostly unwarranted from today’s perspective.


Was it unwarranted?

> While the presentation frequently touches on Smalltalk "motherhood", the principles themselves are more general and should prove useful in evaluating other systems and in guiding future work.

Indeed, the principles of Smalltalk went on to guide the design of many future languages.


I liked Smalltalk for some aspects, but I never liked the 'Message to Object' restriction. It dispatches based on the receiver's Class.

I always preferred the Generic Functions of CLOS, that allows capturing the semantics of an operation outside the stranglehold of the Single Object pattern.
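The contrast can be sketched in Python. Its built-in `functools.singledispatch` has the same receiver-style restriction (dispatch on one argument only), but a toy multimethod registry shows what CLOS-style generic functions buy you: dispatch on the types of all arguments. Everything here (`defmethod`, `collide`, the classes) is a made-up illustration, not CLOS syntax:

```python
# Toy sketch of CLOS-style generic functions: dispatch on the types of
# *all* arguments, not just a single receiver.

_methods = {}

def defmethod(*types):
    """Register a method implementation for a tuple of argument types."""
    def register(fn):
        _methods[types] = fn
        return fn
    return register

def collide(a, b):
    fn = _methods.get((type(a), type(b)))
    if fn is None:
        raise TypeError("no applicable method")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@defmethod(Asteroid, Ship)
def _(a, s):
    return "ship destroyed"

@defmethod(Asteroid, Asteroid)
def _(a, b):
    return "rocks merge"

assert collide(Asteroid(), Ship()) == "ship destroyed"
assert collide(Asteroid(), Asteroid()) == "rocks merge"
```

In single-dispatch Smalltalk style, `collide` would have to live on one class and type-check its argument by hand; here the semantics of the operation live outside any single object.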


If you don't mind dealing with dead projects, you might enjoy Slate as a Smalltalk with multimethods:

https://github.com/briantrice/slate-language


Memento from Internet Archive as of 2007-01-01 16:04:21

http://web.archive.org/web/20070101160421/http://users.ipa.n...


I have never worked with a true Smalltalk. However, I am using a Smalltalk-inspired language pretty extensively, SuperCollider.

I never really dived into a comparison, since I lack real Smalltalk knowledge. However, I'd be interested in what someone who knows both languages thinks about SC compared to Smalltalk.


As a specialised music-programming language and audio server, SuperCollider outclasses Smalltalk, and most, if not all, other music computing systems. As a high-productivity general-purpose programming language, Smalltalk is considerably more sophisticated and capable.

SuperCollider has two distinct components: a backend software synthesizer (the audio server) and a frontend combining an IDE and an interpreted language which draws strongly on Smalltalk but is written in a C-like syntax. The front and back ends (typically, but not always, running on the same machine) communicate using the lightweight OSC protocol. It is perfectly possible to use Smalltalk, or indeed any other language, as an alternative front end to the SuperCollider audio synthesis engine.

The SuperCollider language (sclang) mirrors Smalltalk very closely in some respects, but also has major differences. As in Smalltalk, SuperCollider has a metaclass structure and closures, which are used to implement control structures. SuperCollider makes extensive use of continuations to flexibly sequence musical materials, but is file-based rather than image-based.

Smalltalk has far more sophisticated support for refactoring, debugging and live programming. For example, it is possible to inspect and manipulate the live objects on the stack after any exception, and then resume execution. Smalltalk has a much simpler syntax, a rich IDE, and a wide variety of specialised tools for browsing, refactoring and inspection. In Smalltalk, the complete source code for the whole system is available at all times. Smalltalk also has powerful and well-documented tools for reflection and for modifying the system or changing the model of computation (such as selectively overriding doesNotUnderstand:, though in fairness this is also technically possible in SuperCollider).

More generally, Smalltalk parsimoniously exploits a small number of principles with ruthless consistency and clarity, making it relatively easy to learn, understand, and if desired modify any part of the system. SuperCollider has a different aim: to provide the maximum power and flexibility for musical purposes at the highest quality. This sometimes leads to trade-offs between clarity and power, and to some frameworks whose details are hard to understand. In their contrasting ways, both are outstandingly engineered, seminal languages.
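Smalltalk's doesNotUnderstand: hook, mentioned above, has a rough analogue in Python's `__getattr__`, which is invoked only when normal attribute lookup fails. The following hedged sketch (the `LoggingProxy` class is invented for illustration) shows the kind of message-level interception this enables:

```python
# Rough Python analogue of Smalltalk's doesNotUnderstand:: intercept
# "messages" (attribute lookups) the object has no method of its own for.

class LoggingProxy:
    def __init__(self, target):
        self._target = target
        self.log = []

    def __getattr__(self, selector):
        # Called only when normal lookup fails, like doesNotUnderstand:
        method = getattr(self._target, selector)
        def forward(*args, **kwargs):
            self.log.append(selector)     # record the selector, then forward
            return method(*args, **kwargs)
        return forward

p = LoggingProxy([3, 1, 2])
p.append(4)
p.sort()
print(p._target)  # → [1, 2, 3, 4]
print(p.log)      # → ['append', 'sort']
```

A Smalltalk doesNotUnderstand: override is strictly more general, since it sees the full message (selector plus arguments) as a reified object, but the flavor is similar.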


Wow, thanks for such a detailed answer. I was actually only referring to sclang. I am well aware that the rest (scsynth, and the OSC communication) is very much related to audio synthesis. What you tell about Smalltalk somehow reminds me of Common Lisp, or even the stories you hear about Lisp Machines. Thanks again for taking the time to write up such a thorough reply, I really appreciate it.


Smalltalk became Objective-C.

Objective-C has informed Swift.

Long live Smalltalk.


Each link in this lineage is tenuous. Objective-C took Smalltalk's call syntax, its class def (in a sense?), and its late fn binding, but little else. It certainly didn't take what made Smalltalk, Smalltalk. Full disclosure: I really like both languages.

I struggle to find the mapping between Objective-C and Swift, other than "same company" and must preserve backwards compatibility.

Smalltalk became Squeak. Long live Squeak. :)


I think that Swift got a lot of its "philosophical" structure from ObjC.

After Swift was released, I understood why ObjC 2.0 was written the way that it was. It was modified to allow the standard libraries and SDKs to transition to Swift.

I remember coming from C++ (CodeWarrior/PowerPlant), and encountering ObjC, and going "Whiskey Tango Foxtrot?".

After a while, though, I got used to it, and actually started to enjoy ObjC, in a way that C++ never got me.

I enjoy Swift, even more.

After years of writing code in languages I didn't really like (C++/PHP), Swift is something that I actually enjoy. I suspect that Smalltalk, and its effect, are responsible for the "enjoyment" factor of Swift.

I consider most of the lineage to be "in spirit," more than "in detail."

It's quite easy to shoot holes, if we get particular.


If Swift didn't have a non-negotiable requirement (because of its deployment on Apple platforms) for strong interop with ObjC, I firmly believe it would have been even more like Rust than it already is.

The flexibility that's enabled directly by ObjC's simplicity is just not a goal for Swift.


I've never encountered a language as flexible as Swift.


Swift basically stripped out what was left of Smalltalk from Objective C.



