Hacker News

I'll repeat my comment from a different thread:

"I've never really agreed that numbering should start at zero (or at one); I think people use indices way too much, and it gets in the way of clarity. I much prefer whole-array or whole-list operations, with no fiddling with indices and no off-by-one errors. I like Haskell's array API, for example: you can index by whatever is natural in each case, and you can always get the whole list of valid indices for an array x by using "indices x". I think that's much nicer, and it's also what D is doing: instead of an implicitly paired starting and ending iterator to indicate a range of positions, as the C++ library does, D's library uses a range object. Basically I'm saying you can represent a range of indices as a pair of indices, one pointing to the first valid position and the other pointing past the last valid one, but that's an implementation detail you don't need to expose; make a range abstraction and you'll be happier."
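To make the Haskell point concrete, here's a minimal sketch using Data.Array (the array names and bounds are just made up for illustration): the array can be indexed from whatever base is natural, and "indices" hands back every valid index, so no code ever computes a bound by hand.

```haskell
import Data.Array

-- An array whose natural index range happens to be 10..14.
-- The base is whatever suits the problem, not a fixed 0 or 1.
xs :: Array Int Int
xs = listArray (10, 14) [100, 101, 102, 103, 104]

main :: IO ()
main = do
  -- The full list of valid indices, recovered from the array itself.
  print (indices xs)      -- [10,11,12,13,14]
  -- Whole-array operations need no index arithmetic at all.
  print (sum (elems xs))  -- 510
```

Note that neither line of main mentions a literal bound; the off-by-one question never arises because the array, not the caller, owns its index range.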

"I should also point out that mathematicians don't number at zero unless there is some advantage (the default is to start at one). More importantly, the preferred style in mathematical arguments is to avoid fiddling with indices as much as possible, since it's so easy to mess something up working at such a low level of abstraction."



> I should also point out that mathematicians don't number at zero unless there is some advantage (the default is to start at one)

I disagree. While this may depend a little on which area of mathematics you study, I've found that in situations where there is a reasonably clear / natural / non-arbitrary preference, it tends to be for zero-based natural numbers.

On the other hand, situations in which 1-based numbers are used tend to look more like arbitrary aesthetic preferences. They tend to be situations where no particular `origin' is any better than any other, and one could (despite the ugliness) use 2- or 3-based numbering without introducing any additional corner cases.

My personal favourite `reason' that the set of natural numbers should include zero (and that numbering should start at zero) comes from set theory, where cardinal numbers are equivalence classes of sets of the same size. 0 is the smallest such number, corresponding to the empty set. (For numbering sequences, 0 is also naturally the smallest ordinal number.)
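The ordinal half of this argument can be written out: in the standard von Neumann construction, each natural number is the set of all smaller ones, so the canonical index set with n elements is exactly {0, ..., n-1}.

```latex
0 = \varnothing,\qquad
1 = \{0\},\qquad
2 = \{0,1\},\qquad
n = \{0, 1, \ldots, n-1\}
```

Under this construction, "number the n positions of a sequence by the elements of n" gives zero-based indexing for free.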

These kinds of reasons tend to crop up in category theory and other foundational topics too, which are some of the areas of mathematics closest to theoretical computer science.

Interested in counter-examples though; I'm sure at least some exist.


Well, whole numbers come up in mathematics in different ways. Of course if you're talking about cardinality you should include zero: it simply is the cardinality of some set; you can't avoid that.

I was talking about a completely different use of numbers: numbering, that is, assigning numbers as labels to things. There I think mathematicians on the whole prefer to number starting at 1 (or not to number at all, and work with abstract indexing sets). Sometimes it is convenient to number starting at zero if it simplifies some formulas, but usually it doesn't matter.


To continue with your example, take cases where a (finite) collection of discrete objects is summed over (unioned over, whatever).

You'd let N be the number of such objects, and then the following twenty pages of text would contain summations from 1 to N. This is more compact than summations from 0 to N-1. And in a certain sense, there's one less "token" to remember: if you start at 1, you have 1 and N to keep track of, but when you start at 0, you have N, 0, and N-1.
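The two conventions name the same sum, of course; shifting the index makes the token count visible:

```latex
\sum_{i=1}^{N} a_i \;=\; \sum_{i=0}^{N-1} a_{i+1}
```

The left-hand side mentions only 1 and N, while the right-hand side needs 0, N-1, and the shifted subscript i+1.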

Which convention comes out more compact tends to be discipline-specific.





