Holding a Program in One's Head (2007) (paulgraham.com)
213 points by mmphosis on Dec 8, 2012 | hide | past | favorite | 101 comments


It's actually an indictment of our programming tools that they require one to hold so much of the design context in one's head. If they were better (more expressive, easier to interact with) they would help to solve the problems rather than require superhuman feats of endurance.

pg does mention that succinct programming languages help, which is true, but they don't go nearly as far as they could. Like him, I use Lisp for that very reason, but Common Lisp is 30 years old -- why hasn't the state of the art advanced since then? It's really quite shameful.


> It's actually an indictment of our programming tools that they require one to hold so much of the design context in one's head.

No, it's not, because we will keep looking for the limits. Once a programming tool allows you to do alone what you need 10 engineers for now, some genius will use those tools to do what you need 10 engineers for then, and then teams of 10 engineers will try to copy that. We'll always be driven to meet the limits of what fits in one engineer's head.


Agree. This evolution will inevitably play out according to a version of Jevons Paradox. :-)


Does not follow. Holding stuff in our heads is not the only bottleneck in programming. For example, it's not the bottleneck when solving almost any Olympiad style problem, and yet those exams seem to do a good job of stratifying people.


Those types of problems are not programming problems per se, but more like riddles for the math-inclined. Once you know the solution, the coding tends to be rather trivial with regard to control and data structures.


Olympiad programs are pure algorithms, which I agree are more math intensive than most "normal" programming done today (though I wouldn't dismiss math as "riddles"). But that's my whole point: most problem domains have some mathy parts, whether in the algorithms or from the problem domain. As we reduce the cognitive burden of plumbing and architecture that exists in programming today, these meaty parts will take up a larger portion of the programmer's workload. The required skillset will shift somewhat, rather than rewarding the same mental juggling process at a larger scale.


Oh, but they are riddles. Tower of Hanoi, permutations, travelling salesman kinds of problems. Algorithmically tricky (not hard; the problems have been in textbooks for ages), but trivial from a programming perspective. The kind of problems that a physics major would think constitutes programming, but mostly irrelevant to the complexity met in either the industry or CS academia.

I do some machining as a hobby; there are decades old teasers like turning a cube inside a hollow cube on a lathe from a piece of round stock. I feel like those are very much like Olympiad problems in nature, and just as remote from real-life machining exercises.


I think programming tools and technology have become much better over the decades (more expressive and easier to interact with). Programmers today are building far larger and more complex software than ever before.

Programmers will always be working at the limit of their abilities -- if the tools make anything easier then we'll just start tackling harder problems.


That sounds like a great reason to improve the tools.


> It's actually an indictment of our programming tools that they require one to hold so much of the design context in one's head.

Reminds me of a misconception about math that a vocal student I met once had. She thought she could understand the attraction of math because the symbols and equations could look pretty. I had to explain that the beauty was not in the symbols, but in the pictures and concepts they could evoke in your mind, much as the beauty of music isn't in the notes on a page, but in the sounds they represent.

Feynman once said something about how equations evoked animations in his mind.

This is the value of things like Lisp, Python, Ruby, Smalltalk, Light Table -- by interacting with systems in a tight feedback loop, you can start to form pictures and understandings in your mind. It also shows where those environments and other current tools are lacking. Instead of having to reconstruct relationships in our minds, we should have a way of navigating an explicit diagram of them. Even IDEs like Eclipse, for all their complexity, still require you to read then form relationships in your mind. (It's like we're all still doing word processing with non WYSIWYG word processors.)

http://www.andrewbragdon.com/codebubbles_site.asp


Feynman had a neurological condition called synesthesia [1], in which senses get mixed together in the same neurological pathway and you can "taste" a sound or "hear" a color. That is very interesting, since Feynman was said to have a "frightening" ease with equations - an intuitive understanding of what they meant.

[1] http://en.wikipedia.org/wiki/List_of_people_with_synesthesia...


I didn't know that was a thing. I fit this description.


Very interesting! What do you experience?


> Common Lisp is 30 years old -- why hasn't the state of the art advanced since then? It's really quite shameful.

I don't think it's shameful. APL is close to 50 years old now, and its most recent incarnations (J and K) are 20 years old - and they're still avant-garde, even though the world is slowly catching up to them (with LINQ and friends), although not as elegantly or cleanly.

What exactly are you missing about the "state of the art"? Entire K modules usually fit on one screen; reportedly, kOS (http://kparc.com/) is an operating system in 5 screenfuls of text.


Mumble mumble Clojure mumble simple made easy mumble mumble immutable values mumble concurrency.


I have no idea what you're talking about here with the mumbles; but I can see that you're getting upvoted, so I suppose there's some hidden context here that I'm not aware of. Someone care to clarify?


He's probably referring to the idea that much of the complexity around today is not particularly useful - in particular, that programming with values rather than variables makes understanding a program substantially easier.

Simple Made Easy is a talk by Rich Hickey, the author of the Clojure programming language which encodes a lot of these ideas into the language.

http://www.infoq.com/presentations/Simple-Made-Easy

[EDIT: ambiguous comma :)]
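The "values rather than variables" point can be sketched even in Ruby (Clojure makes immutable values the default; in Ruby it's opt-in via freeze, so treat this as a rough analogy):

```ruby
# Mutable style: one object changes over time, so understanding any
# line requires replaying every earlier mutation.
totals = { "alice" => 10 }
totals["alice"] += 5          # totals no longer equals its old self

# Value style: each step yields a new, frozen value; old snapshots
# never change and can be inspected (or shared) in isolation.
v1 = { "alice" => 10 }.freeze
v2 = v1.merge("alice" => v1["alice"] + 5).freeze

v1  # => {"alice"=>10}  (unchanged)
v2  # => {"alice"=>15}
```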



Shuttlebrad is right, but I was mocking the general pattern of proselytization.

I believe in what Hickey's doing though.


Sounds like you have a number of tangible, palpable, feasible ideas for such super-smarter next-gen programming tools -- care to share?

(Just hoping what you have in mind isn't UML + SOAP + some unintelligible über-abstracted-meta-code-gen...)


I have a number of such ideas; sadly, I'm short on time, so I've never gotten to write them down in detail. Besides, my friends don't seem to "get" it, so I worry I'd waste my time inventing the newest developer tools without being able to gain any traction, due to thinking inertia (or me being a crank - you can never tell :) ).

Briefly, I think I've understood how to properly build an interactive data-driven application at the highest level of abstraction. If I were to build out the tools, one would be able to create such an app by defining data models, view models, derivation logic (for deriving view models from data models), viewmodel-bound layouts for individual stages, and a stage-linking workflow, all in their respective domain-specific languages. It all looks very neat in my head, and I have even programmed a couple of the pieces in my iOS apps to great effect.
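One hedged guess at what the smallest piece of that might look like in Ruby - a declarative data model plus pure derivation logic producing a view model (all names here are invented for illustration):

```ruby
require "date"

# Hypothetical sketch of two of the layers described: a declarative
# data model, and pure derivation logic that produces a view model.
DataModel = Struct.new(:first_name, :last_name, :signup_date)

# Derivation logic: the view model is computed purely from the data
# model, with no hidden state in between.
def derive_view_model(m)
  {
    display_name: "#{m.first_name} #{m.last_name}",
    member_since: m.signup_date.strftime("%Y")
  }
end

user = DataModel.new("Ada", "Lovelace", Date.new(2012, 3, 1))
derive_view_model(user)  # => {:display_name=>"Ada Lovelace", :member_since=>"2012"}
```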

Oh well. Back to work now.


I have one: natural language programming.

I have some ideas about how it would work, and have looked a little into how it might be implemented and have some seemingly feasible ideas there too. But I'm not an expert in either NLP or programming language development, so what do I know.

A good first step in this direction is the natural-language programming language for creating interactive fiction, Inform 7. The problem is that it's not general purpose. But you can try reading about it if you think natural language is too ambiguous to ever possibly be a programming language.

I want to write down all my thoughts about this idea, but I thought I'd get some more meat on it before I do that. Also, I haven't mentioned here why I think natural language programming can help, but I'll leave that to your imagination for now.


> I have one: natural language programming.

This exists. It's "Write each step of an algorithm in comments before writing any code, and (only when you're finished writing each step in English in comments) fill in the code each comment represents".

It's impossible to convince anyone to try it, though, even though it gives all the benefits you're imagining NLP would give.

The mentality "Don't repeat yourself under any circumstances, even if repeating adds no complexity and increases clarity!" is unfortunately why this remains undiscovered. People also tend to go overboard with the "commenting" part -- the purpose is to be pseudocode in the way Python is executable pseudocode, while retaining the ambiguity that makes natural language natural. So the technique gets a bad reputation, and its power stays hidden.
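A minimal Ruby sketch of that workflow - the English steps come first as comments, then each is filled in beneath (the task and names are invented for illustration):

```ruby
# Step 1: read the log lines
# Step 2: keep only lines that mention an error
# Step 3: count occurrences per error code
# Step 4: return codes sorted by frequency

def error_frequencies(lines)
  # Step 2: keep only lines that mention an error
  errors = lines.select { |l| l.include?("ERROR") }
  # Step 3: count occurrences per error code (codes look like "E42")
  counts = Hash.new(0)
  errors.each do |l|
    code = l[/E\d+/]
    counts[code] += 1 if code
  end
  # Step 4: return codes sorted by descending frequency
  counts.sort_by { |_, n| -n }
end

log = ["ERROR E42 disk full", "INFO ok", "ERROR E42 disk full", "ERROR E7 timeout"]
error_frequencies(log)  # => [["E42", 2], ["E7", 1]]
```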


The closest experience I've had to this is using RSpec and Capybara to do test-driven development.

Here's an actual snippet of a test I wrote before writing the corresponding code for an app that's now in production:

  describe "adding participants" do
    context "when user has created a new event" do
      before :each do
        visit root_path
        click_link 'Get Started'
        fill_in 'Name', :with => 'Tres'
        fill_in 'E-mail', :with => 'tres@sogetthis.com'
        click_button 'Next'
      end

      it "should prompt for more participant info" do
        page.should have_field 'Name'
        page.should have_field 'E-mail'
        page.should have_button 'Add Participant'
      end

      it "shouldn't let you enter an invalid e-mail" do
        fill_in 'Name', :with => 'Tres'
        fill_in 'E-mail', :with => '#)(*)($*#)($*'
        click_button 'Add Participant'
        page.should have_content 'invalid'
      end
    end
  end
It's been an absolute joy to work this way. I'd love to see this level of abstraction make its way into the mainstream in other languages and environments.


Some Lispers have been doing something similar for some time. There's even a saying that "Any sufficiently well-documented Lisp program contains an ML program in its comments."


Literate Programming - another 30-year-old "technology". And it's good. As you say, the problem is not the tools, but rather the mentality of reinventing the wheel at best, and staying ignorant of the history of our field at worst.


I have been known to do that. It is great for writing an initial block of code.

It doesn't work so well 6 months later when you're editing that code.


One way to solve the ambiguity problem of natural languages is using a controlled natural language, which is a limited subset of a natural language[1]. Some of those are machine transformable to logic. There's even some work to encode legal knowledge using them.

[1]http://en.wikipedia.org/wiki/Controlled_natural_language


Thanks, interesting! Not mentioned on that page, but a notable natural language which is similar in its goals is Lojban [1]. It's based on predicate logic, which is what Inform 7 uses as well in order to derive the actual meaning from natural-language sentences. Inform 7 manages this because it is based on a limited subset of English.

But I'll note that we humans manage to understand each other just fine without resorting to a controlled natural language. We solve the ambiguity problem basically by taking the most likely interpretation of a given sentence, given the context, or asking questions if too confused. And in fact, there are natural language parsers out there that can predict the most likely parse reasonably well for single sentences.

The biggest problem here is that programmers like having full control of what's going on, whereas this sort of idea brings in a large dose of unpredictability: how will the parser interpret this sentence? I think this can be solved to some extent by having the parser tell you what it parsed in less ambiguous terms. In fact, you might ask the parser to present a normalized version of the code that is more specific about exactly what's going on (and probably more verbose), and also expanding out phrases to show you exactly what they mean for debugging purposes. Also, when a clear parse cannot be made, the parser can simply ask you to clarify, presenting the multiple parses that it could not decide between.

[1] http://en.wikipedia.org/wiki/Lojban


I'd just like to point out that Lojban is a constructed language, not a natural language. It was designed and built from the ground up, as opposed to the GP's controlled natural languages which are pre-existing human languages with a bunch of stuff stripped out of them.


This is so cool. Follow-up question:

> There's even some work to encode legal knowledge using them

Where do I learn more about this? What's the name of that language, or research project, or...?


i just did a google search for: "controlled natural language" legal, and got some results.


Hm, I have never really thought about NLP, but I guess maintaining an NLP program somebody else has written must be even more of a nightmare than what we have now...


My take on this is moving the control flow of software to separate, visualizable graphs that you can monitor and manipulate: http://noflojs.org/


> why hasn't the state of the art advanced since then?

The Fred Brooks essay "No Silver Bullet" is a good place to start:

http://www.cs.nott.ac.uk/~cah/G51ISS/Documents/NoSilverBulle...


> It's actually an indictment of our programming tools that they require one to hold so much of the design context in one's head.

Completely true. That is exactly Bret Victor's point in "Inventing on Principle":

http://vimeo.com/36579366

http://worrydream.com/#!/LearnableProgramming

Programming tools (languages, APIs, IDEs) need to reduce the amount of the problem you have to hold in your head.


In my experience, this is PG's most helpful/reassuring technical essay.

Recently I was working on some pretty complex algorithms with a gargantuan cobweb of edge cases. Not only did I have to work for 15+ straight hours, but in order to maintain productivity I fasted (bar caffeine, a few nuts & water) every other day to keep the steam going.

Incredible times, and I'm rather close to reaping the fruits now, but there is a pretty good chance that I would not have gotten here had I not read and internalized this essay - I was grappling with several decades of CS research, though in a much less theoretical setting.


i'm not bashing you - i've been there myself. but that same shared experience requires me to ask: dude, how are you going to maintain that?

you need to find a simpler way... if that's how you feel now, a year down the line you're going to hate that code.


Long stretches of uninterrupted thought are required to correctly partition the problem, which is hard because the number of possible partitions grows faster than exponentially with the number of moving parts. Once the problem is optimally partitioned, understanding the solution is a linear effort. Hence, the effort required to understand the problem is not indicative of the effort required to understand the solution. For a new person to understand why this partitioning was preferred over all the others requires going all the way back, and I worry that can never actually be replicated - which is why old software projects often stagnate after the original architects depart.
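For the record, the count of ways to partition n parts into groups is the Bell number B(n), which grows even faster than exponentially; a quick Ruby sketch via the Bell triangle:

```ruby
# Bell numbers via the Bell triangle: B(n) counts the ways to
# partition a set of n elements into non-empty groups.
def bell(n)
  row = [1]
  n.times do
    nxt = [row.last]                      # each row starts with the last row's end
    row.each { |v| nxt << nxt.last + v }  # then runs sums along the previous row
    row = nxt
  end
  row.first
end

(1..8).map { |n| bell(n) }  # => [1, 2, 5, 15, 52, 203, 877, 4140]
```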


> dude, how are you going to maintain that?

Depends on why it took 15 hours to figure out the problem and make it work. I would presume he (you) learned something during the process, so if it had to be repeated you wouldn't have the same false starts and understand the general solution much quicker. After that, comments should help the coder understand the details like edge cases and non-obvious decisions.

I speak from a recent similar experience where I had to build a fairly complex piece of multi-threaded image processing code on Android. Starting out there was a lot of knowledge about image processing and Android that I simply did not know. Learning and internalizing that information took a significant chunk of time reading the docs and other peoples code. Only then could I fully understand my particular problem and implement a solution.


Warning: controversy.

The text misses the point in that having to hold a program in one's mind is not required to begin with.

The real problem is complexity. There's just so much going on that it's difficult to remember. And complexity is solved through abstraction - separating the problem out into manageable components. That way the programmer works at a system level, at which components are composed into a solution, or at the component level, where one specific aspect is dealt with.

That minimizes the amount of stuff the programmer needs to hold in his head. If you find yourself oscillating between higher and lower levels of abstraction you should probably be revisiting your architecture.

It's called separation of concerns.

No tool improvement is going to solve that. Neither will a rethink of ways of working.


That's begging the question. To come up with the abstractions in the first place requires understanding the whole problem. There's no getting around this.

I can't tell you how many programs I've seen where the wrong abstraction is never perturbed, and as a result all feature development pays a huge cost in reliability and functionality.

To create or change an abstraction, you have to get your mind around all sides of it. That requires A LOT of context. You should always be "revisiting your architecture". It's basically never true that the first architecture works, unless you've written an extremely similar program before. The worst programs result when people are afraid to shift boundaries. That happens a lot in big company development that PG talks about -- each team just accumulates crud on their side of the abstraction boundary, but they never talk to each other so they can globally simplify.

It's true that abstraction reduces complexity, and thus is the only way you can build big programs, but it takes a lot of hard work to get there.


The "whole problem" is simply a process. Processes in the real world are composed of 1-* sub-processes, and ultimately distill down into transformations (input -> transformation -> output).

You don't need to understand every low-level process to make a start.

You raise two interesting points though. Yes, revisiting an architecture is required when you've not solved a similar problem before. Consider though that in this community 99.9% of the time others have solved your problem before. You could reinvent the wheel (which is arrogant), or you could do some research to see which architecture has been successful for others.

Sure, that means doing some work before writing code which seems decidedly unpopular these days. It's a free world.

You also say that it's a lot of hard work. Yes. Nothing worth doing is easy.

[Edit] There's a really good book (there always is) that describes complexity and problem solving better than any other I've found. http://www.amazon.com/dp/0787967653


> Consider though that in this community 99.9% of the time others have solved your problem before. You could reinvent the wheel (which is arrogant), or you could do some research to see which architecture has been successful for others.

The best way to learn something is to come up with it yourself. If you base your architecture on what you read in books or elsewhere, you probably only have a limited understanding of it. And that's especially bad when you're talking about the architecture of a program.

So maybe a better idea in terms of the end result is to first come up with your own solution, implement at least a functioning prototype, and then do some research, to improve upon your idea based on what you have discovered (or throw it away, in the worst case, but of course keep what you've learned doing it).


> The best way to learn something is to come up with it yourself.

That's been bugging me, and it's been bugging me because it's not entirely right. It's not entirely wrong either, though.

Given an undirected graph in which you need to calculate the shortest path between two vertices, would you slog it out or would you just find a shortest path algorithm?

Now move that up a level, from feature implementation to feature design -

If you had to federate identities between two directories, would you hack something together, or would you have a look to see if someone else has done that before? Assuming the latter, you'd probably discover OpenId, OAuth, WS-Federation, and SAML. Would you then use one of these or try to roll something that models their behaviour?

Take that up all the way until you reach the age-old build vs. buy debate, and the connected enterprise and so on. Building something just to understand it is not making much sense to me.


> Given an undirected graph, in which you need to calculate the shortest path between two vertices, would you slog it out or would you just find a shortest path algorithm?

Probably just find an algorithm. This is because my goal was not to learn about shortest path algorithms, rather it was to find the shortest path. Whichever algorithm I chose and however I implemented it is of minimal concern - if it turns out to be bad, I can probably redo it without significantly affecting any other part of the program. I claim that, if, on the other hand, you wanted to learn about shortest path algorithms, it would be better to first at least try to think of a solution or two, then go for the books.
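For an unweighted graph like this, the algorithm you'd find is breadth-first search; a minimal Ruby sketch (the graph and names are made up for illustration):

```ruby
require "set"

# Breadth-first search: shortest path (fewest edges) between two
# vertices of an unweighted, undirected graph given as adjacency lists.
def shortest_path(graph, start, goal)
  queue = [[start]]          # each queue entry is a full path so far
  seen = Set.new([start])
  until queue.empty?
    path = queue.shift
    node = path.last
    return path if node == goal
    (graph[node] || []).each do |n|
      next if seen.include?(n)
      seen << n
      queue << (path + [n])
    end
  end
  nil  # no path exists
end

g = { "a" => %w[b c], "b" => %w[a d], "c" => %w[a d], "d" => %w[b c e], "e" => %w[d] }
shortest_path(g, "a", "e")  # => ["a", "b", "d", "e"]
```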

Speaking of software architecture, given the profound impact a solution will have and how disastrous bad choices can get, it should be your goal to learn about whatever you're dealing with.


> If you had to federate identities between two directories, would you hack something together

Right, that "try it yourself" part can get very big and impractical. But the idea still stands: it would be better if you first tried to think about what a solution would look like and what it would do, in very general terms. Are the two systems very similar, such that a solution would be little more than mapping one thing onto another? Or are the systems very different, such that complex translations would have to be performed?

Once you do that, you can have a look at any existing solutions, and almost certainly you'll have a better understanding compared to if you didn't do the thinking part first.

It is also important not to switch the order of "thinking" and "looking". If you first look for existing solutions, then your thinking will be biased by what you have seen.


You're describing the semantics of moving from a solution concept to an implementation. How you get there is entirely up to you.

What I was suggesting is that a design that separates concerns will obviate the need to keep the entire thing in your head. This is problem solving through the composition of discrete components. AKA managing complexity.


You seem to be in love with the idea that a problem can be understood by decomposing it into sub-problems, and considering them individually.

Where I could agree with you is that a solution to a problem should be understandable in pieces, and then composed out of those pieces. Where I don't agree is that I think piecemeal understanding of the problem is likely to yield suboptimal partitioning of the problem into abstraction pieces. Following your train of thought you will end up with a solution that is locally optimized, but globally inefficient, because the problem was not partitioned correctly. Proper partitioning, I think, requires global understanding of the problem, with enough detail of what matters. The problem with global understanding is that most worthwhile problems do not fit in the head easily, but the solution is that coding pieces of the solution allows a more efficient way of reasoning about the problem, eventually leading to a global, or at least global enough understanding.

In other words, I worry you've succumbed to the "just design it correctly, and it will work well" fallacy. I call it a fallacy based on my own experience, but also on the "Mythical Man-Month" book. I'll give the right quote in a moment.

I might be wrong in how I see your point, but if I am right you will hit some serious problems down the road. I don't think I can convince you now, but it would certainly be worth the effort for you to remember this conversation and think back to it if you do in fact hit the problems.

[EDIT] And here's the promised quote from "The Mythical Man-Month", by Frederick P. Brooks, Jr., 1975, to illustrate the "just design it correctly, and it will work well" fallacy:

---------------

I still remember the jolt I felt in 1958 when I first heard a friend talk about building a program, as opposed to writing one. In a flash he broadened my whole view of the software process. The metaphor shift was powerful, and accurate. Today we understand how like other building processes the construction of software is, and we freely use other elements of the metaphor, such as specifications, assembly of components, and scaffolding.

The building metaphor has outlived its usefulness. It is time to change again. If, as I believe, the conceptual structures we construct today are too complicated to be accurately specified in advance, and too complex to be built faultlessly, then we must take a radically different approach.

Let us turn to nature and study complexity in living things, instead of just the dead works of man. Here we find constructs whose complexities thrill us with awe. The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built. So it must be with our software systems. Some years ago Harlan Mills proposed that any software system should be grown by incremental development.

That is, the system should first be made to run, even though it does nothing useful except call the proper set of dummy subprograms. Then, bit by bit it is fleshed out, with the subprograms in turn being developed into actions or calls to empty stubs in the level below.

--------------
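The incremental growth Mills proposed can be sketched in Ruby: the pipeline runs end to end from day one, with dummy implementations fleshed out one by one (all names here are made up for illustration):

```ruby
# Top-level pipeline runs end to end immediately; each step starts
# as a dummy and is developed into real behavior later.
class Report
  def run(raw)
    render(summarize(parse(raw)))
  end

  # Already fleshed out: split the input into records.
  def parse(raw)
    raw.split("\n")
  end

  # Stub: pretend every record summarizes to its length for now.
  def summarize(records)
    records.map(&:length)
  end

  # Stub: rendering is just inspect until the real formatter exists.
  def render(summary)
    summary.inspect
  end
end

Report.new.run("abc\nde")  # => "[3, 2]"
```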


I mentioned this below - I've been programming since 1986. It took me the last 7 years to really nail down that this duality of perspective (high-level, complete system vs. low-level, one aspect) was common across all successful projects I've led or worked on.

By success I mean something that not only meets the stated requirements, but is also maintainable, extensible, secure, scales, and is available.

I'll leave it at that.


Nice idea, but, sorry, it does not work as well in real life as it does in theory. :-(

In any case, everybody is talking as if holding a program in your head is a huge problem. It really isn't. All it means is that particular individual understands the problem (and the solution) deeply and comprehensively. That sounds like a good thing to me. It is true that communicating that level of understanding can be challenging, which is true of most domains. What is special about software development is the degree to which we attempt to communicate that understanding.

Well, why do we do that? Do we really need to communicate that understanding when it is sitting happily in that developer's head? Instead of moving the mental model from one developer to another, trying to squeeze that rich, internal representation through the band-limited drinking-straw that is the human sensory suite, can we just move the developer himself, mental model and all?

The problem really arises because of all those preconceptions that we have about how team-working and large organizations should function, about how people should be replaceable cogs: jelly-beans that can be discarded at will.

Do you like being a cog?

Do you like working for organizations that treat people that way?

I do not.

Instead of trying to dehumanize our profession, can we not re-humanize it? Instead of talking about re-using code, why do we not talk about re-using developers? Recognizing that developing code is as much about developing knowledge and expertise in the head of the developer as it is about herding electrons through tiny pieces of semiconductor. We should value that learning, that knowledge and expertise, and seek to maximise the return from it, rather than hanging on to the (inappropriate, perverse, and just plain wrong) idea that humans are substitutable parts in a machine.

Write code, yes. Write documentation, yes, but recognize and acknowledge the unavoidable truth that a significant part of your intellectual capital lives, not on pieces of paper, nor on magnetized platters, but inside somebody's head. Consequently, when you seek to maximize the return on your investment, seek to build on your developer's expertise by redeploying him or her to use the same expertise to solve as many related problems as possible.


Interesting. I've been writing code for 26 years. For the last 12 of those I've been an architect, where my job is to identify, design and often implement components of a solution, their attributes, their relationships, and the attributes of those relationships.

Projects I've worked on range from Nokia Maps for Windows Phone to a GB£12M healthcare system. I'd say I have a fairly excellent understanding of what works in theory, and what works in reality.

There is no interest or intention to de-humanise anything. In fact I swing pretty hard the other way. Consider a presentation I gave on the human aspects of architecture - http://www.wittenburg.co.uk/Entry.aspx?id=d5002929-97b2-4902....


What about Haskell? The type system tells you exactly what's going on.


Some tools are better than others. I'm not familiar with Haskell but can tell you that in the .NET world LINQ is bad, because it forces the melding of process components with data access logic.


Haskell has its own warts, of course, but the functional straitjacket you're forced into makes it very easy to understand the scope of a computation.


Holding a large program in your head is like holding a city in your head. You can't entirely do it. What you can do is know the broad outlines and principles, so you can navigate to the relevant points and negotiate the difficult parts. When you see the details at the relevant places, you can deal with them. It's like being a cab driver: you don't memorize every cobblestone and pothole, but you know how to get where you need to go, and you can deal with them as you're driving.


You can visit the neighborhoods often enough that they are quickly familiar when you happen upon them again. That's called a domain expert.

In fact, London cab drivers DO hold the city in their head. They are required to, before they get their license.


Yes, I was thinking about what I read about London cabbies. However, they don't necessarily have recall of every pothole and cobblestone. They remember enough of a map so they can get to where they need to be, and deal with what's there on the way.


For years now I have been arguing with other programmers on the merits of IDE features such as intellisense. My position has always been that they are completely unnecessary. Autocomplete is useful, sure, since it saves some typing, but that is its primary goal - to save keystrokes, not to help you remember what a class is capable of doing.

Many programmers appear to get caught up in the small picture way of thinking, where all that they consider important is the code they are currently working on. The fact of the matter is that a piece of software is an ecosystem. Every part of it is directly or indirectly tied to every other part. It is only when we consider the system as a whole that we can create an elegant architecture. This is simply not possible if things are always seen as units and their relationships are an afterthought.

Thank you for this link. I shall be using it to add fuel to any future arguments along these lines.

EDIT:

Clearly, it takes a lot of effort and skill to pull this off. The bottleneck becomes the human. The logical question that follows is 'how do we make the human more efficient and capable of remembering more?'. The answer is diet, nootropics, meditation, exercise and knowledge of techniques (e.g. how to memorise facts rapidly). I won't go into specifics but suffice it to say that most people are undernourished and are mentally impaired because of it.


>>>> "The answer is diet, nootropics, meditation, exercise and knowledge of techniques (e.g. how to memorise facts rapidly). I won't go into specifics but suffice it to say that most people are undernourished and are mentally impaired because of it."

Dude, if you have some concrete insights in these areas, you definitely should go into specifics. A lot of us would be very interested.


My apologies, I thought that it was outside of the context of the conversation.

A surprisingly high percentage of people have nutritional deficiencies. For example, an alarmingly high percentage of people in the US are deficient in Magnesium, an essential nutrient (meaning that your body cannot manufacture it).

http://en.wikipedia.org/wiki/Magnesium_deficiency_%28medicin...

> 57% of the US population does not meet the US RDA for levels of magnesium

And we're talking about a first-world country here! Magnesium is essential for a healthy stress response, and, as we probably know from experience, many people struggle with the stresses of day to day life. Anybody who regularly consumes alcohol, tobacco or caffeine is likely to be deficient as those drugs rapidly deplete Mg reserves in the body.

That is just one example. Another is Calcium, which 75% of people are deficient in (http://www.livestrong.com/article/365193-heart-disease-cause...).

The body also becomes less efficient as time goes on. Have you ever wondered why old people tend to be more grumpy than everybody else? Because our serotonin (a neurotransmitter which is implicated in mood and irritability) levels fall as we age - http://www.pslgroup.com/dg/4098E.htm

One way to offset the brain's natural decline of neurotransmitters is to supplement with the precursor amino acids which are used to manufacture those neurotransmitters. For example, L-Tryptophan, an essential (again, meaning that you cannot make it) amino acid is rapidly absorbed by the body and is used to make more serotonin (among other things). The difference between protein and straight amino acids is that the latter are ready for use, whereas proteins, which are composed of amino acids, need to be broken down to their constituent parts before the body can use them.

Another age-related neurotransmitter decline - dopamine. A lot of older people have had some success with using D,L-Phenylalanine and L-Tyrosine to boost dopamine levels and reclaim their sex drive.

Healthy acetylcholine levels are also essential for cognition and can be boosted with things containing Choline (eggs are an excellent source).

GABA levels are very important for short term memory and focus. For example, anyone who drinks coffee will have experienced that overexcited state where you have a mountain of motivation but lack proper focus. Taking L-Theanine (Green Tea also contains this), an amino acid which increases GABA levels, will restore the focus. I never drink coffee without L-Theanine for this reason.

Taking L-Glutamine prior to drinking alcohol will prevent a hangover. It will also stop alcohol and sugar cravings. This is because the body is capable of using L-Glutamine as a source of energy.

But the most important thing is how all of these neurotransmitters interact with each other. Some of them are polar opposites, some modulate the release/inhibition of others. The brain is always striving for balance (homeostasis). All of these neurotransmitters, nutrients and minerals are vectors which pull it in one direction or another. Keeping them in balance is the key to feeling good. Having a model of their interactions makes it considerably easier to debug problems and fix them. This comes with experimenting with your body and gaining an intuitive understanding of what's what. I think that it would be difficult to come up with a generalised approach due to people's baseline levels of each neurotransmitter being different.

Note that even popular multivitamin brands sometimes contain too little of a given nutrient. I recently bought a doctor-recommended one and it barely contains any magnesium or calcium. It pays to educate yourself and not to completely defer your health to someone else. It's your body and it's essentially your problem.

After addressing deficiencies, I found that I felt a lot better and that my rate of recovery from stress, a night of drinking or strenuous physical activity had improved considerably.

Another important factor is removing things which cause harm. E.g. alcohol, in sufficiently high doses (high enough to be drunk), is neurotoxic, meaning that it directly harms the brain. One of the primary mechanisms of alcoholism is that the damage caused by alcohol directly contributes to future alcohol cravings. A lot of people have problems with anxiety - they would be wise to discontinue caffeine intake altogether as it is a major risk factor in anxiety disorders.

Meditation has a host of benefits and structural changes have been observed in the brains of practitioners - http://www.scientificamerican.com/podcast/episode.cfm?id=med...

Exercise is absolutely essential and, frankly, should be a first-line treatment for things such as depression.

There are also a few new and interesting nootropic substances such as Piracetam, Aniracetam and Noopept, the last one being the most effective for me personally.

All the wise folk say that the body should be treated like a temple. I completely agree with them since the mind and the body are but one and the same. Treat the body right and the mind will follow. Treat the mind right and you will feel right.


Interesting. Where have you learned this information? Are there some books / studies / other sources you can point to? (Not a demand for evidence ... just that I'm curious to look more into this).


I've mostly been googling for studies/information which explain things I do not understand. Most of the studies tend to be on Pubmed (www.ncbi.nlm.nih.gov). A lot of references can be found on wikipedia, which is a great resource in itself.

This is paired with playing with neurotransmitter levels using the methods described (and occasionally some less legal ones) and trying to relate feeling to thought. After a while, an intuitive understanding of the terrain that is the body, thought and emotion begins to emerge. Sometimes you feel something new, notice an interaction, make a prediction that neurotransmitter X has relationship Y with neurotransmitter Z, google for studies and are surprised that said relationship has been observed. Intuition is as accurate as the information it's working with - it can sometimes be trusted and other times cannot. I'm a programmer and to me this feels identical to debugging in a messy monolithic legacy codebase. I use the exact same techniques to try to figure my mind out.

I think that it is important to be picky about sources. I do not trust anything which cannot be backed up by a study, though less accurate information can sometimes point you somewhere interesting. I also find it important to tread away from the mainstream with caution, since I'm not an expert in this field. In other words stuff like Reiki is out of the question - it needs to at least be feasible.


This is totally misleading, and may make people feel they are not good at programming.

To quote one of the masters of CS:

The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague.

    Dijkstra (1972) The Humble Programmer (EWD340)


To me, this goes back to the "single page program": all programs should have a top level that can be expressed as a single page of executable code and comprehensibly captures the essence of the program.

If our programming language(s) don't allow for this, see what's wrong and fix it. Lather, rinse, repeat.
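A hedged sketch of what such a single-page top level might look like for, say, a small report generator (all names hypothetical): the top function reads as the essence of the program, and details live one level down.

```python
# The whole program at a glance: each step is a black box with a
# descriptive name; the details live one level down.
def run_report(source_path):
    records = load_records(source_path)
    cleaned = drop_invalid(records)
    summary = summarise(cleaned)
    return render(summary)

# One level down, the boxes are small enough to read individually.
def load_records(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f if line.strip()]

def drop_invalid(records):
    # Keep only well-formed (name, value) pairs.
    return [r for r in records if len(r) == 2]

def summarise(records):
    return {name: int(value) for name, value in records}

def render(summary):
    return "\n".join(f"{k}: {v}" for k, v in sorted(summary.items()))
```

Whether the language makes this decomposition natural or painful is exactly the test the parent comment proposes.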


So if you need to write a big, complex program, the best way to begin may not be to write a spec for it, but to write a prototype that solves a subset of the problem.

This is so true for me. And not just for big complex programs but also for anything not completely trivial.

The one (applicable to me) part of the joel test (http://www.joelonsoftware.com/articles/fog0000000043.html) that I never made any progress on was having a spec. I just couldn't do it whether through laziness, inability to concentrate or whatever. Actually it's been a bit of a guilty secret of mine.

Generally there'll be some part of the program that I'll be able to solve right away and while I'm doing that I'll be having ideas about how to do something else. Later it might become obvious that a certain part would have been better done another way and if there's a serious benefit to changing things I can do it at that stage.

It's only after a lot of work's already been done that I'd be able to produce some kind of spec for the program

Having a spec that lays everything out beforehand is to me analogous to a mathematician writing the final proof of a theorem before doing all the thinking.


You should never need to hold the entire program in your head! (except when you actually begin coding it - for this early stage of development I agree with PG). Needing to hold a whole mature program in your head every time you work on it is a code smell that tells you your solution ended up in the shape of the most popular software architecture of all time: the "big ball of mud" (http://www.laputan.org/mud/). Now it's obvious why having a big working memory makes you a great programmer - everything usually starts as a BBOM or ends up as one, but the point is to fight this tendency...

Once a program grows, you should architect it so that you only need to keep the piece that you're working on in your head, ie it should be a network of black boxes and you should only need to open the one you're currently working on, and even when you do things like large scale refactoring, you should be able to selectively and partially open only some of the boxes to do your job - and this is what programming languages and patterns should help you do!


I've noticed that, beyond a certain point, the term "program" is not actually very useful. Because it's open on the other window, is Facebook a program? Is the news feed a program? The buddy list? The chat features? The photo upload? The status update?

How about the API? Is that a program?

I could easily say that my scripts are programs. They're rarely more than a file large. But "program" is not a descriptive term for them; I say "script" because that explains that they are not services. They run once, do their job, and finish. Or something.

At work, I have a domain focus. There are swaths of code I own, and other swaths that my domain has strong and weak connections to. I know my domain. I can boot it up in my head at will, though it's too large to stick: I have to walk through each room independently, rather than having some manifold presence in every room. And there are a ton of things that I have to look up every time, because I don't actively work on those pieces. But there's no clear division at which I can say, "This is the program. I should put it all in my head."

We just remember everything that we can, and try to remember where to find out everything that we can't. I think that's reasonable.


To avoid "memory overload", I have taken to literally ignoring parts of the system which I currently don't need. I.e., I don't even look at other classes' code until I need them. Otherwise, I would have to spend 3 days understanding it all. That's for very big programs, and of course I do look at the overall structure and what patterns have been used to couple classes together. But if a method promises something with a contract, I won't read through it, but treat it as a black box. My theory is that cognitive power is like money spent during the day. I can recharge it after a few hours by taking a nap of 20 minutes, but I better watch what I expend my cognitive credits for. And spending it all on reading other people's code (which has probably already been revised 10 times) is not worth it.


...this is what we all try to do I guess. You said "I don't even look at other classes' code" but I always find it much easier to not look at a function's code or at the code of a method of an immutable object or a "predicatively mutable" objects than for classes of highly mutable objects.

...that's why I'm currently investigating functional programming as way to make it easier to hold larger parts of programs in your head.


Holding the entirety of the problem in your head is a necessity when the program is an algorithm, and especially if it's a hermetic algorithm.


...that's probably why algorithms are never too big to hold in your head (except crypto, for my head at least :) ), at least once you understand them ...and software tends to be "overgrown" at the interfaces, be it UI or IPC or some networking protocol, not where the "complex" algorithms are anyway (probably because if you can find a mathematically expressible algorithm to do something it doesn't overgrow like a mad bush, even if you add a dozen branches for special cases...)

...and I'm not entirely sure that most algorithms are truly "hermetic" ...maybe in some areas most of them are, dunno...


I agree with the power of holding a program in one's head, but I also consider this a (the?) serious bottleneck in software engineering.

I hope we discover a scalable alternative to holding a program in one's head. It doesn't need to be as good as holding a program in one's head, it just needs to approximate it.

(Edit: see also Design Beyond Human Abilities by Richard P. Gabriel.)


We have - it's called an API. The way to work on big programs, too big to hold in your head all at once, is to break them into a bunch of little programs, each of which does something you can hide behind a simple interface. Then you hold all of the library in your head at once, and possibly repeat the same process.

The complexity comes in that this really does just approximate holding the whole program in your head, and sometimes you find the API is not adequate to what you want to do. Then you're back to square one, except with a codebase that by definition is too big to hold in your head.
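As a toy sketch of the parent's point (all names made up): the interface is what you memorise, and the implementation is what you open only when the interface fails you.

```python
# shortener.py -- a "little program" hidden behind two functions.
# Callers hold only this two-function interface in their heads.
_store = {}

def shorten(url):
    """Return a short code for url."""
    # Implementation detail callers never need to think about:
    # derive a 6-hex-digit code and remember the mapping.
    code = format(abs(hash(url)) % 16**6, "06x")
    _store[code] = url
    return code

def expand(code):
    """Return the original url, or None if the code is unknown."""
    return _store.get(code)
```

The failure mode the parent describes is when a caller needs something the two functions can't express (say, listing all stored URLs by age): then the black box must be opened, and the whole codebase behind it comes flooding back into your head.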


It's also the primary driver behind the principles of object oriented programming - but those are almost dirty words these days. Somehow OO tried and failed in many ways to solve this problem, but I think it gets too little credit for at least having its heart in the right place.


I think OO was a first (and pretty good) stab at the problem. The primary goal was recursion, so having things be self-similar, and Alan's insight was to make the small things look like the big things (lots of little computers interacting), rather than making the big things look like the small things (everything is a function,...)

http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltal...

Back in 1972, we obviously didn't have a lot of "big things" (e.g. large distributed systems) to look at, so instead analogies and intuition were used. Considering, the results are not too shabby, IMHO.

Nowadays we have the Interwebs, a huge distributed system that appears to work most of the time, and so one might ask if we can apply the same principle and get newer and possibly better results: if we want self-similarity, should our computations internally look like the web?

We also have lots of experience with the systems we've built, and things like the Patterns books that tell us where our means of capturing common behavior fall short (if it's a pattern, it's repetitive; if it's repetitive, I should have been able to factor out the common parts).

So I am not sure "failed" is the right way to describe it. As another poster pointed out, we are now capable of building much larger systems than before, and OO seems to be largely responsible for that.

Another perspective is that Smalltalk (again) was never intended to be something final, but rather a basis for obsoleting itself, so describing OO as failed seems akin to calling the first stage of a Saturn V a failure because it didn't get to the moon.

Where I do see a failure is that we entered an age of stagnation, and rather than see OO as a first stage to build even better things on top of, we pretty much stopped. And in that sense, I guess OO did fail, because the second and third stages that were supposed to get us to the moon didn't really get built.


An even more powerful approach is minimalistic design with objects placed where they belong, to achieve harmony. Design that, among other things, minimizes the number of objects one is required to hold in one's head.

Personally I call it a design with good Feng Shui :) When things and objects are in their right locations, and in harmony, one doesn't have to remember where they are! One can just remember the harmony and find them in their places.

That's precisely why Python is so damn good, by the way.


I would argue TDD is a workaround which allows you to do meaningful work in a team context without holding a complete program in one's head.

It gets everyone to put their thoughts about what the program should do in one place. Every time you run the test suite, you are outsourcing to the test suite the task of running through your mental model of the program and thinking "what did I break".

The limitation of TDD is that it blocks rapid iteration in what the program should do. When you load the entire model of the program into your head, requirements change at the speed of thought. With TDD, they change at the speed of a lot of reading and typing.
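A minimal illustration of the parent's point with Python's built-in unittest (the function and tests are hypothetical): the suite encodes "what must not break", so the machine replays that part of the mental model for you instead of you holding it in your head.

```python
import unittest

def slugify(title):
    # The behaviour the team has agreed on, captured once in tests
    # below instead of in each developer's working memory.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("a   b"), "a-b")

if __name__ == "__main__":
    unittest.main()
```

The cost the parent identifies is visible even here: change your mind about what a slug should be, and both the function and every assertion must be rewritten before you can iterate again.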


>>> "The limitation of TDD is that it blocks rapid iteration in what the program should do. When you load the entire model of the program into your head, requirements change at the speed of thought. With TDD, they change at the speed of a lot of reading and typing."

I agree 1000% with that. You articulated very well what I see as the single biggest drawback of TDD.

It's as if you're still working with clay, but you put metal around it every 2 seconds, so you lose the benefit of working with clay.


TDD done correctly should allow the validation of "speed of thought" changes. It happens to me often that ideas spawned between my ears break functionality in the actual application if the application is complex. It doesn't mean that the idea was invalid, tests simply illuminate what else needs to be changed if you'd like to go forward with the change.


This is very similar to how a lot of writers that do split point-of-view write their stories. "Load" all the information needed for a particular point-of-view, write all of their chapters, unload, and repeat.


Rings true...

I definitely work best in 4-10 hour focused sessions, can easily do a week's worth of work in one go. I can't always make those sessions happen though. Coding block? I guess I should write. [1] :)

Around the time this was written, I was working somewhere you had to break all the rules to get anything accomplished. The un-sanctioned stuff was of much higher quality and function.

[1] http://tommy.authpad.com/understanding-and-combatting-coder-...


Peter Naur wrote on this subject in 1985 ("Programming as Theory Building") and came to much the same conclusions.

Here is the paper for anyone interested http://www.google.dk/url?sa=t&source=web&cd=1&ve...


I find it interesting to read this again after a few years only to realize that Paul is in direct contradiction of those who would have us take a break every 25 minutes with his second point. It's kinda refreshing to hear this point of view again. I worked 5 solid hours last night, starting that particular stretch around 7pm and got around 3-4 normal work days worth of work done.


Yesterday a friend explained the physiological reasons behind 25 minute breaks: basically the muscles (including eye muscles) go into a different state of relaxed-tension and it's generally not good for them to be in that state while the body is sitting-looking.

There's also the issue of the effort required to concentrate on a task, which gives diminishing returns as you extend it beyond ~25 minutes without break.

However a mental flow state (what you seem to describe) requires little to no concentration effort and so can yield great returns over the time spent. The tradeoff is that the body is in a less-than-desirable muscular situation for a prolonged time - but I'm guessing the damage there is not so significant if 8 hour flow sessions are not a daily habit.

tl;dr: the 25 minute break thing is probably a good general guide, but shouldn't stop you pulling an 8 hour session if you are absorbed and productive


> Maybe we could define a new kind of organization that combined the efforts of individuals without requiring them to be interchangeable.

...reread the whole thing and this thing got my imagination "high" ...I feel there's some deep wisdom in here ...maybe we do things wrong in most of our organizations by requiring complex components like people to be interchangeable, instead of accepting the fact that they are not and that the "personality" of an organization should be allowed to radically change as people (or bigger "unique" subsystems) become part of it or leave it... maybe we could even end up with what Nassim Taleb calls "antifragile" organizations...


It's not just the "personality", though. It's also the idea that if your sysadmin gets hit by a bus, your site doesn't become unusable.


yep, the compromise is probably to separate maintenance from creative tasks, and accept that maintenance tasks need replaceable components, but then again, maybe it's ok to accept things like having a certain software component written by a "lone wolf eccentric genius" in a dialect of lisp he alone can understand (replace with your fav equivalent phrase) as the price for having really unique features and performance for that component that no competitor can match, and make contingencies for the risk of having to throw away that codebase and rewrite from scratch if he gets hit by a bus... maybe if the APIs and interfaces are properly designed (or processes or however else you may call them in peopleware land), you can accept working with unique and not easily replaceable components and somehow design systems that are architected to embrace the "hit by a bus" type of risks

...and if I think further along these lines, the organizations that can best afford these types of risks are big software corporations (think Google, Microsoft) that could afford to scrap entire codebases and projects (if they ever got over the "mind brakes" that make the managers consider such things insane), or start things in completely new directions when they bring in new "genius visionaries" ...these types of innovation based on risky and irreplaceable components/people would be prohibitively expensive or impossible for start-ups, but may bring us new breakthroughs in things like general purpose AI or god knows what

...maybe true progress really is the work of unique individuals and our whole focus on "team work" and "replaceable peopleware" is what suffocates and kills innovation


Rich Hickey made some very similar points in his Strange Loop presentation "Simple Made Easy" http://www.infoq.com/presentations/Simple-Made-Easy


I'm not so sure about the "don't touch other people's code, and don't allow them to touch yours" bit. What if someone gets hit by a truck? Or leaves the company? Or, shrug, gets a promotion? I've been pretty comfortable with the "collective code ownership" approach, within small enough teams, in the last 4 years. You can talk issues over with a colleague and he knows what you're on about, code reviews become effective, and getting something important changed does not mean you have to wait for the 'owner' to have time for it. What's wrong with that?


Exactly what I thought - I only 'own' the code if it's a topic branch, once my peers reviewed it, they should know it as much as I do.


Not sure about working for long uninterrupted sessions (I tend to believe that short, self-scheduled breaks increase my productivity, as per the pomodoro technique), but I think the notion of loading a 'context' into your memory is spot-on. I know that, at least for me, it is difficult to even begin working on a problem unless I have that initial context in my head; without a full understanding of the problem at hand, I feel like I can't do anything well.


Perhaps the key to a productive software industry is - paradoxically - to de-industrialize. Model the organization and communications patterns on a guild of independent workshops, each workshop comprising one artisan developer with one or two apprentice developers, with each workshop supplying hyper-specialized services based around a library of existing, re-deployable functionality.


Are there any applications (mobile, web or whatever) whose sole purpose is to help a programmer capture the program that is in his/her head?

The first thing that comes to mind is UML based applications like Rational Rose. But those seem to have such high barriers to entry and are more geared towards communicating the program to other developers or different stakeholders.


One good test for this is if you can explain in abstract terms, and preferably to a non programmer, how the program works.


I just came across this article in a HN post last week that was discussing a similar theme. I think it does a great job explaining the challenges of being a programmer...

http://alexthunder.livejournal.com/309815.html


This is a fantastic summary, and I thank you for sharing.


I wonder what the most complex system is, to date, that has been made and that can all fit in one head.

A recent example, though not a program, is Mochizuki's proposed proof of the ABC conjecture. I wonder what program would be comparable to that in complexity.


I do it. I build mockups in my mind, run through clicks and interactions and finally the code flows.


Wow, this is particularly true when you're not rapid prototyping. An aha moment happens when one is able to reverse think a complete chain of events leading up to a catastrophic failure because you saw a tiny UI/UX bug on the surface.

Paul has written such splendid pieces that should resurface from time to time. This thread has happened [1] to the YC community before.

[1] http://news.ycombinator.com/item?id=2988835


work in long stretches

Nope, those idiotic open offices aren't conducive to that.



