Peek Behind Bret Victor's Lab at YC HARC (limn.it)
177 points by skadamat on July 20, 2017 | hide | past | favorite | 79 comments


What a bunch of vague, hand-wavey nonsense (the article, not necessarily the HARC lab; I couldn't make heads or tails of what they actually do). It covers up its lack of clear thinking with an attempt to sound profound and obfuscatory in every sentence, like many academic articles. It's also full of weird typos and misspellings (e.g. "mid 1970ties", "most currently available imaginaries about technologies").

I love Bret Victor's talks; in them he shows real engineering work and highly original ideas demonstrated with beautiful working prototypes. So I don't want to sound like I'm knocking him. But reading this article makes it sound like some fool is footing the bill for endless navel-gazing and philosophizing. Some kind of strange architecture astronaut convention (https://www.joelonsoftware.com/2001/04/21/dont-let-architect...)

It's continuing the ongoing lowering of my opinion of Alan Kay too; the more I hear about him the more he seems like a good bullshitter who happened to write a programming language back in the day. "Kay combined them with ideas about pedagogy, psychology, and mathematics by Maria Montessori, Seymour Papert, and Jerome Bruner, and added further zest in form of the sassy media theoretical speculations of Marshall McLuhan." Wow, world changing stuff; if anyone knew what the hell it meant it'd probably have a big influence on the course of civilization.

"So Kay took all of these ideas, desires, technologies, and opportunities, and recombined them. The results were crucial contributions to a new and emerging sociotechnical imaginary, in many ways representing the computer as a digital medium, which we now have today." Again, sounds really revolutionary, and wonderfully vague enough that you could say he predicted pretty much everything that happened in the last 30 years. I say: ideas are cheap, especially vague ones.

God bless the real engineers who make things work; photography and digital art were revolutionized with the availability of programs like Photoshop, not pseudo-philosophers bloviating about "representing the computer as a digital medium".


It's important not to let pragmatism wander into anti-intellectualism.

"pedagogy, psychology, and mathematics" are all well-defined terms with non-bullshit meanings. Montessori, Papert and Bruner are well-known names whose theories would be familiar to anyone with a background in learning theory. McLuhan (the "medium is the message" guy) is similarly well-known in media circles, and if you want to understand why Alan Kay thinks he's important, he wrote about it: https://kokonautlabs.wordpress.com/2013/02/27/alan-kay-on-ma...

I'm not saying you're wrong to dislike the article or Kay, but the way that you got there seems to be by saying "they rely on a bunch of bullshit that nobody understands". How are you, presumably lacking a background in the ideas they are talking about, able to determine the difference between good theory and bullshit?

If you want to reason about something, like how people learn, or how technology shapes thought, how would you do that without building a theory about it? How would you build that theory without ideas? How would you refer to those ideas without names? Is your criticism anything more than just that you don't know the names in a field you aren't familiar with?

God bless the real engineers who make things work; programming and the personal computer were revolutionized with the availability of hardware like TTL ICs, not pseudo-engineers bloviating about "structured programming".

But I've seen the code written by real engineers, and they could perhaps use a little less pragmatism and a little more theory. Thinking in abstractions can certainly lead to bullshit, but I'd rather that than no thinking at all.


Agreed, with regard to the quote about Montessori, Papert and Bruner, I don't have the necessary background knowledge to truly evaluate the statement. The terms "pedagogy", "psychology" and "mathematics" are very broad though - not meaningless, but nearly so in this context. If you say somebody combined teaching in general, theories of human thinking in general, and mathematical reasoning in general, that's too broad to really get any sense of what's being talked about, and whether the ideas are valuable or just hot air.

I guess I've come to be suspicious of pure philosophy, or theory divorced from practice. A broad idea can be wonderful when it's used alongside actual practice (like Bret Victor's explorations of immediate visual feedback and interactivity in creative tools and programming tools). But when it stays in the theoretical realm it can get insidious: perhaps designed to impress rather than inform. I haven't dived deeply into Alan Kay's work except for learning a bit of Smalltalk (and enjoying it), but everything I've come across since leaves me scratching my head, wondering if there's much substance under all the semi-profound and abstract talk.

So I guess I'm expressing my growing antipathy towards pure theorizing divorced from practical implementation. Together they can be wonderful, but if grand ideas are floated around in a vacuum too long they start to stink.


I think there are a few things in play that affect Kay's behavior, since he is a basic researcher:

1. He needs grants. Grand theories and "bullshit" sell in that context.

2. Basic research isn't a tool to gain working systems. It's a tool to gain knowledge, including negative knowledge. And to some extent a fair way to judge a basic researcher is "What have you tried? How seriously did you try it?".

And yes, even though I think Kay is definitely full of himself, he does manage to do ambitious and interesting basic research (like the VPRI STEPS program, trying to build a full computing stack à la Windows in 10K LOC via DSLs).


Your adoption of a vague, hand-wavey, nonsensical tone as a performative critique of the article was not lost on me; well done. Comments like this show the true potential of the comment section as a medium for critical discourse. It seems commenters often forget that the style of their words can carry an argument forward just as well as the content. Again, well done. I've taken this as inspiration and a challenge to be more thoughtful and creative with the style of my arguments.


Either this is satire, or you've completely missed his point. There is little to no hand-waviness in the response; he flat-out describes the article as pretentious, bloviating nonsense.


I never saw the source code of Photoshop, but it probably uses OOP...



Which was invented by some Norwegians in Simula 67, wasn't it?


Are companies like J.P. Morgan using Simula or Smalltalk?


The lab's list of projects: http://harc.ycr.org/project/


Thanks for the link!

I've seen an early alpha version of GP during the Scratch conference of 2016 (the 2017 conference is wrapping up tomorrow), and it worked awesomely, like a supercharged version of Scratch. A fun thing about it is that it features a very small VM, and the rest of the interface and blocks are written in GP itself, and can be readily inspected and modified.


Of all the orgs to have a live demo, I would have guessed Bret Victor's lab to have one. Still, looks pretty interesting.


They're rightfully waiting until they think things are ready to show. The danger of showing early is a recent memory in computing history (see the Jobs / PARC visit).


From what I've heard of the Jobs/PARC visit I don't think I'd characterize it that way. In Walter Isaacson's "The Innovators" he says that Xerox had taken an ownership stake in Apple and some higher-ups decided to give them valuable tech.

Also, Alan Kay had a Quora answer on this:

https://www.quora.com/What-was-it-like-to-be-at-Xerox-PARC-w...


You're right, and my comment made it seem like everyone has some grudge or something. I think in general the PARC community was wary that the takeaways would be centered around how computers, GUIs, etc. were great ways of doing old things better (accounting, reading text, etc.) instead of being captivated by computing as a medium for thinking. This happened with print, as Alan has discussed in some of his talks. The first 100 years of print were used to print and distribute religious texts, and it took a while before ideas that could probably only sprout in the print medium were invented (physics, calculus, some philosophies, etc.). To some degree, I think some people feel like we're still in the "use print for spreading religious texts" phase (or the "film is a great way of recording & distributing plays" phase).


wow, i didn't know that this is related to the eleVR thing i saw on youtube: https://www.youtube.com/watch?v=F7yLL5fJxT4 (of vihart fame).


Bret's talks outline the general philosophy, motivation, and framework for the lab's work:

- Seeing Spaces: https://vimeo.com/97903574

- Humane Representation of Thought: https://vimeo.com/115154289


I support this and find it really cool. However, I'd love to see less hero worship and more evidence. On its face there is nothing wrong with all those Alan Kay photos, but my feeling is that it's symptomatic of how much of what's being pursued rests on appeal to authority and/or nostalgia.

To make this more concrete, consider one of their projects (http://harc.ycr.org/project/), block-based programming: what's the evidence that this is a superior way of learning programming? What's the evidence that so-called computational thinking enhances cognitive ability or is transferable to day-to-day thinking?

Furthermore, much of their computer-based education work is based on Seymour Papert's research and ideas, which in turn were based on Piaget's theories of mental development. Those theories, however, have been thoroughly challenged and are all but outdated.

EDIT: this was the top comment and actively upvoted; now it seems that the mods have placed it at the end, just before comments with negative points. I had heard of YC's sensitivity towards any criticism, but this seems extreme. What's the point of discussion without criticism?


Several years ago I gave a talk at HAR2009 about my work applying Constructionist Education ideas to SimCity, which explains how I collaborated with the OLPC project to open up and publish the SimCity source code as free software, and translate Papert's and Kay's philosophy into concrete goals and features:

HAR 2009 Lightning Talk Transcript: Constructionist Educational Open Source SimCity, by Don Hopkins.

http://micropolisonline.com/static/documentation/HAR2009Tran...

Chaim Gingold, who also works at HARC, discussed my work open sourcing SimCity and his own work analyzing and using the source code to teach game design, in his PhD thesis on Play Design:

Open Sourcing SimCity - excerpt from Chaim Gingold’s PhD dissertation on Play Design, University of California Santa Cruz, June 2016.

https://docs.google.com/document/d/1DNAvqvKsuLGih8dWz9feEjea...

Abstract: http://pqdtopen.proquest.com/doc/1806122688.html?FMT=ABS

PDF Viewer: http://pqdtopen.proquest.com/doc/1806122688.html?FMT=AI


Got links for crits of Papert / Piaget and modern alternatives? Constructivism at least appears to be live and kicking in the MIT Media Lab hype world

Edit: also on computational thinking, I'm not sure that's what people are arguing for http://worrydream.com/MeanwhileAtCodeOrg/


The main failures of Piaget's work are that it assumes a smooth progression in reasoning faculties with age and, secondly, that its notion of formal operations rests too much on deductive logic. But humans do not really reason logically: affirming the consequent and denying the antecedent are fallacies often encountered. However, as the 19th-century logician Peirce once stated, "Not the smallest advance can be made in knowledge beyond the stage of vacant staring, without making an abduction at every step."

A famous counter-example is an abstract selection task which adults fail with high probability. Yet, when a structurally identical set of rules is given to ~9-year-olds but couched in the language of permission, the children are able to pass with high probability. Stating the rules in terms of what the individual has experience with (or altering the linguistic phrasing) increases the probability of solving far more than adjusting the age does. So we have that, on one hand, people develop capacities sooner (and more unevenly) than his theory supposed, and on the other, some stages of reasoning aren't ever reached.

amasad is wrong to imply that the field has not progressed beyond the ideas of Piaget. That said, people do often get the important parts of Papert's ideas wrong by focusing too much on the computer part of computer-based learning. The computer should be a means to an end: a tool to help the learner explore more possibilities, make the abstract more graspable, and amplify one's ability to ask better questions.


Even if we assume that all of Piaget’s theories about developmental timelines are bunk, how does that invalidate the concept of teaching constructively in a child-centered, problem-centered way? Those seem like mostly orthogonal concerns.

As for block-based programming: that seems like something we can test empirically, and indeed there’s quite a bit of literature about it (which I have not read and am not familiar with, sorry).


I was directly replying to the question that was asked. But it is not just the developmental timeline that was off. Its hypotheses about what children learn, how they reason, and what they are capable of are also inaccurate.

Child-centered, problem-centered is vague. More practically, the big things are: you want material that's novel to the learner but also somewhat familiar (this is true regardless of age). Children's attention is more distributed and worse at blocking out irrelevancies. Therefore, more complex tasks (such as math) should be presented in a way such that the signal meant to be learned can be extracted with minimal ambiguity (less extraneous information in the presentation). Few other things have replicated.


Thanks for your replies, which intuitively make sense and appear to come from a knowledgeable background. As in my question I would really like to read the peer-reviewed / field leading work that you're basing your statements on, same as the comment I replied to. Or even just names of people / labs / journals to search.


Yes, please do provide some citations. This is interesting stuff.


I would need to know what the actual task is, but isn't that obvious? If I ask adults "What's 2+2" in a language they don't understand and I ask the same question to children in their native language, the children will look smarter than the adults.


It's nothing at all like that. I already mentioned the Selection task above; it's a very googleable term. It's old, and there is as yet no theory fully explaining why people fail at it as they do. Pragmatic reasoning schemas are the nearest attempt at an explanation. https://en.wikipedia.org/wiki/Wason_selection_task


By serendipity, I clicked through on this link and sgentle's link to Kay on McLuhan, and the following sentence comes remarkably close to my experience of the two presentations of the Selection task:

> .. the most important thing about any communications medium is that message receipt is really message recovery: anyone who wishes to receive a message embedded in a medium must first have internalized the medium so it can be “subtracted” out to leave the message behind.

(a tangent, but perhaps interesting)


Thanks for the Peirce reference. Would you recommend any books/papers/videos as introduction to his work?


but looking at just one of their projects is antithetical to harc

from Götz' writeup:

> Simply building prototypes with prototypes would not be a smart recipe for radical engineering: once in use, prototypes tend to break; thus, a toolset of prototypes would not be a very useful toolset for developing further prototypes. Bootstrapping as a process can thus only work if we assume that it is a larger process in which “tools and techniques” are developing with social structures and local knowledge over longer periods of time.

a lot of the ideas, or prototypes, in harc are half-finished and/or completely abandoned

sometimes an implementation's best contribution is the ancillary knowledge gained by attempting or developing

also, i think kay's presence is less 'hero worship' and more a reminder of shared goals (o) as well as a default stand-in

Götz's first contribution to a project in the space was to create an animation using cutouts from multiple copies of a picture of kay that were lying around

would you call lenna 'hero worship'?(i)

(o) https://www.youtube.com/watch?v=ubaX1Smg6pY

(i) https://en.wikipedia.org/wiki/Lenna


Prototypes increase understanding and discovery without an undue burden on making something operational. The parent comment strikes me as a "do you even".

Is it hero worship or having a wise, hardened bad-ass on the team?


Yeah this is spot on. The point of this kind of research is to discover how to frame the problem (we don't really know what it is yet) and create new contexts for thinking about computing. Under the lens of the current open problems in computing, their approach may seem odd, different, not grounded in evidence, etc. But like good art, good, foundational research provides new ways to think about the field entirely.


Bret Victor also gave a presentation at the Santa Fe Institute but I haven't seen a recording or transcript of it.

https://www.santafe.edu/events/words-are-obsolete-explaining...


Yeah wasn't able to find a recording either :(


While I don't think there's a public recording of this version, BVic's talk with the same title "Words are Obsolete" was recorded in 2015[0]. I'm not sure if the demos were the same, but it's something.

[0]: https://vimeo.com/114252897


You're probably right actually.


On the "don't let the prototype out of the lab too early" theme, it would be interesting to read an alternative history novel about a world where the Mother of All Demos never happened. And Von Neumann never spoke about his architecture. Etc.

What are some good examples of prototypes or even mediums in the sense discussed here that were "let out" at "just the right time" or "too late"?


It's definitely an interesting thought. To some degree, it's maybe impossible to avoid the filtering down / reductionism of a big vision, even if you wait until things are polished. I think the hope with the Mother of All Demos was to try to shift people's thinking about computing and try to change where efforts were directed. You could imagine researchers attracted to the vision, working on problems there, and eventually working with commercial partners (or doing it themselves) to solve some of the technical, scale, and other problems that occur when a research vision goes through the filter of reality.


Victor has indicated that his Realtalk group[0] at HARC is trying to release new research by the end of the year - so maybe we won't have to wait too long before learning a little more than this article provides.

[0]https://harc.ycr.org/project/realtalk/


I recently asked Bret:

"It's such a delight to introduce somebody to your work for the first time! I’m mentoring somebody at work in the ways of user interface design, and I just linked him to your classic “Magic Ink” article, and I am indulging myself by reading it again. You’ve written so much since then, that I have a lot of catching up to do. Any suggestions where I should start?"

He referred me to these two articles: "If you're looking for something of mine to read, I'm partial to this one and this one."

http://worrydream.com/LadderOfAbstraction/

http://worrydream.com/MediaForThinkingTheUnthinkable/

And here's a "secret internet video" (oops! ;) which Bret shared, "which doesn't really talk about what we're working towards but maybe gives some flavor of the material we're working with. (You might notice Chaim Gingold in a few shots -- the table with pinball etc. was his project.) Here's a vague description of the current system under development."

http://worrydream.com/oatmeal/realtalk-tech-teaser-2017-02-2...

https://harc.ycr.org/project/realtalk/


Video is gone :(


i was fortunate enough to get a tour of HARC from Götz

after the tour he asked me what i thought

i explained the place looked like my 'play room'

he pressed me, 'what does that look like?'

'an ideal, a sort of messy utopia'


It's good to see that projects with such innovative ideas exist and can transform our whole life experience into something completely different, like the technologies of the '70s did.


This is the first I had heard of Limn, and it looks like a really interesting magazine. What other great non-mainstream tech/design magazines are out there?


I thought so as well, I hope to see more answers to this post.

I don't know if it is mainstream or not, but IdN magazine is a great design magazine; see here: https://www.google.com/url?sa=t&source=web&rct=j&url=http://....


I think I found a new dream job.


I heard from Alan that HARC is quite open and anybody can just drop by!


I was confused about [and after 250 pages of thinking through a “reactive engine,” it culminates in a “handbook” for an imaginary “Flex Machine”: a first iteration of a set of ideas that culminated a few years later in Kay’s vision for a “DynaBook” (1972)] since as far as I know the Flex was actually built and tested on normal users (who didn't like it, according to Alan).


I was really confused by the repeated use of "imaginaries" in the piece, but a quick Google suggests it's a case of sociological jargon: https://en.wikipedia.org/wiki/Imaginary_(sociology)


Not directly relevant to the article, but I love Bret's previous work in UX and data visualization[1].

[1] https://worrydream.com


I love academics who describe the technical work of engineers. I think this type of translation work is totally needed.


Not a comment on the content of the article, but I find it pretty obnoxious when an author uses a word like "hereodox" which has a much more commonly known nearly-exact synonym "unorthodox". It's hard to imagine any reason for that beyond "look at how clever of a writer I am".


Nearly-exact is definitely not exact. To my ear 'heterodox' connotes difference and pluralism while 'unorthodox' is more of a negation. (Of course other mileages may vary.) But different words always exist for a reason—if they didn't, one would have fallen out of use. Pretentiousness is not enough to keep a word alive!


I wonder if you can offer a pair of online dictionary definitions that illustrates what you see as the difference in meaning between the two words. (I bet you can't)


Look closely enough and you'll see that they all do. One meaning for 'heterodox' is 'holding unorthodox opinions'. That's an attribute of the person, not the opinion.

The royal road to clarifying subtle differences in language, however, is etymology, and there the difference is plain. Hetero = different, dox = opinion, ortho = correct. So the distinction here is something like "diverse opinion" vs. "incorrect opinion", which to me seems clearly meaningful.


> The royal road to clarifying subtle differences in language, however, is etymology

No, it's not. Etymology tells you where a word came from, not what it means (either in denotation or connotation) in current usage. It will often be completely misleading in trying to unpack subtle differences in meaning.


How is heterodoxophobia more nuanced than literally meaning being afraid of different opinions?

http://tcpc.blogs.com/musings/2010/05/homodoxuals-and-hetero...


I couldn't disagree more.


The royal road to clarifying subtle differences in language, however, is etymology, and there the difference is plain.

Is it? (to either of those things). In current English usage, 'unorthodox' doesn't mean 'incorrect' and doesn't connote 'incorrect'.


Etymology be damned: I'm still trying to figure out if he meant "royal" as in "Royale with Cheese" or "royal" as in "royal pain in the ass".


Cool.


There is definitely a difference in meaning, maybe just in specialized fields like theology and economics.

An Austrian school economist, or an Arianist in the early Christian church, would be called "heterodox" but probably not "unorthodox". It has the connotation of following a definite tradition that is not the main tradition in the field. "Unorthodox" means something more like "doing your own thing", so Steven Levitt (the Freakonomics guy) might be called an "unorthodox economist" because he studies unusual topics like drug dealers, but not a "heterodox economist" because he studies these strange topics using mainstream theories.

I think "heterodox" fits reasonably well in this article, it suggests that these engineers are following a distinct tradition, not just "doing their own thing".


Funnily enough, an essay cited by the article contains this quote in its opening paragraph:

"...defining a heterogeneous orthodoxy for the coming Information Age..." [1]

Presumably Victor's research group is working in opposition to this conventional thinking, making them... unheterorthodox?

1. 'THE CALIFORNIAN IDEOLOGY' - http://www.imaginaryfutures.net/2007/04/17/the-californian-i...


The word aside, he's not wrong. That article is filled with flowery nonsense, making long prosaic paragraphs out of thin air. I like reading long articles, but this one doesn't contribute much.


Quite possibly! But a bad article can contain a good word. I wanted to stand up for 'heterodox', first because words are my friends and I like my friends, and second because HN is a textual place and good English is part of the business here.


Have you tried deconstructing it?

http://www.fudco.com/chip/deconstr.html

How To Deconstruct Almost Anything: My Postmodern Adventure

by Chip Morningstar, June 1993

"Academics get paid for being clever, not for being right." -- Donald Norman


I respectfully disagree. The article made considerable effort to evade techno-solutionist fantasies.


Nobody? Tough crowd.


At risk of sounding foolish... I even read yours/his comments, and I am still confused.

I am fairly confident that I have never seen the word heterodox used before.


For each of us there are good words we haven't seen before. That's lucky, since life would be duller otherwise.


That's because it means exactly the same thing as "unorthodox" and almost everyone simply says that. Hence the discussion - you aren't alone.


When given the choice of a naming a variable notSomethingDescriptive or somethingElseDescriptive, I always choose the positive name, because it's easier to understand in logical expressions, and reduces double negation. It's better to describe what something is, instead of what it's not. So unOrthoDox is more negative (and more complex) than heteroDox, because it begins with the prefix "un".
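The double-negation point can be shown with a toy sketch (variable names hypothetical, just illustrating the naming choice):

```python
# Positive name: both branches of the condition read directly.
is_valid = True
action = "proceed" if is_valid else "retry"

# Negative name: expressing the common case now stacks two negations
# ("if not not-valid"), which is harder to parse at a glance.
is_not_valid = not is_valid
action_negated = "proceed" if not is_not_valid else "retry"

# Both conditions are logically equivalent; only readability differs.
print(action, action_negated)
```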


You're right, but on the other hand, it's probably not best to use words that few are familiar with if you are trying to make yourself understood. On the gripping hand, a variable name is just a variable name and its true meaning doesn't really depend on its name at all, so okay, go for the burn. (especially if you're writing Java code)


i got to know Götz and spoke to him about his time 'in the field' at HARC

you can look at your complaint from at least two perspectives

first, he is german and english is a second, or possibly third or fourth, language

as the saying goes 'there's a perfect german word for that', but in this case Götz was able to find the perfect word in a second language

second, one of his favourite revelations from his time at HARC was the inhabitants', and i argued many 'creators', nearly obsessive, nearly detrimental, preoccupation with 'naming things'

every new idea, every new prototype, every new function, every new variable needs a name and Götz would watch as the team labored over 'what to call it'

a practice that fascinated Götz and when he recalled it to the room and those of us who had been there confirmed the same preoccupations he erupted in glee

so to call out Götz for naming something exactly as it was meant to be conveyed i think would engender in him a pride of assimilating this ritual


Not sure if I agree with your assessment, but the author certainly loves the word - it appears in the article 5 times. Maybe it's a technical term with a precise, widely used definition in the ethnography field?


I find it obnoxious when a commenter whines about a single word in the first line of a long article. It's hard to imagine any reason for that beyond "look at how good of a complainer I am".


I think he was trying to be ironic by being picky and pedantic about the exact meaning of a word while spelling it wrong at the same time. I took it as a successful attempt at self-deprecating humor: criticizing "clever writers" while ignoring the squiggly red line his spelling checker drew under the word he was talking about.


I agree with both of you.



