
In this simulation, the amount of money you have does not affect how much you give away per round (until you reach $0, at which point you give away nothing) or how much you receive per round (until some people reach $0, at which point the average amount received drops). So in most cases, whatever you have, your expected gain per round is zero. Since the process is not self-correcting except at the $0 boundary, it is not surprising to see (1) a snapshot with an uneven distribution, and (2) no one dropping out permanently.
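
For concreteness, here is a minimal sketch of the process as described, in Python (the population size, starting amount, and round count are assumptions; the video's exact setup may differ):

    import random

    def simulate(n_people=100, start=100, rounds=10_000):
        # Everyone starts with the same amount of money.
        wealth = [start] * n_people
        for _ in range(rounds):
            # People at $0 at the start of the round give nothing.
            givers = [i for i in range(n_people) if wealth[i] > 0]
            for i in givers:
                # Pick a uniformly random *other* person to receive the $1.
                j = random.randrange(n_people - 1)
                if j >= i:
                    j += 1
                wealth[i] -= 1
                wealth[j] += 1
        return sorted(wealth)

    print(simulate())

In a typical run the sorted snapshot is markedly uneven, even though each person's expected gain per round is zero.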

Actually, I don't know what point they are trying to make with this video. There is nothing surprising here in the general sense, nor any solid statement about what is specifically being shown.



The point is that it's counterintuitive even to intelligent people, and that matters because politics and economics are largely driven by the accumulation of intuitive decisions.


How would it change if it were 1% instead of $1?


I just ran it at 1% instead of $1.

On only 5% of days was the spread wider than 80–120 (i.e., a minimum below 80 or a maximum above 120).
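
A sketch of that variant (fractional dollars and a per-round check of the 80–120 band are assumptions on my part):

    import random

    def simulate_pct(n_people=100, start=100.0, rounds=10_000, pct=0.01):
        wealth = [start] * n_people
        wide = 0  # rounds where wealth left the 80..120 band
        for _ in range(rounds):
            for i in range(n_people):
                pay = wealth[i] * pct  # give away 1% of current wealth
                j = random.randrange(n_people - 1)
                if j >= i:
                    j += 1  # uniformly random *other* person
                wealth[i] -= pay
                wealth[j] += pay
            if min(wealth) < 80 or max(wealth) > 120:
                wide += 1
        return wide / rounds  # fraction of "days" with a wide spread

    print(simulate_pct())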


I went a little further and ran a simulation [^1] with random payments in the range from $1 to a fixed percentage of the payer's wealth.

When this percentage is in the 2–10% range, wealth inequality is significantly smaller than in the base case (i.e., all payments are $1). Beyond that range, it starts growing again.

[^1]: https://gist.github.com/lou1306/1041ed6cd4eed433cfabf45f666b...
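
The actual code is in the gist; as a rough sketch of the idea (uniform payments between $1 and cap × wealth, with inequality measured by a Gini coefficient; the names and the Gini choice are my assumptions, not necessarily what the gist does):

    import random

    def gini(xs):
        # Gini coefficient: 0 = perfect equality, near 1 = one person has everything.
        xs = sorted(xs)
        n = len(xs)
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * sum(xs)) - (n + 1) / n

    def simulate_cap(n_people=100, start=100.0, rounds=10_000, cap=0.05):
        wealth = [start] * n_people
        for _ in range(rounds):
            for i in range(n_people):
                hi = max(1.0, wealth[i] * cap)
                # Random payment between $1 and cap% of wealth, never more than you have.
                pay = min(wealth[i], random.uniform(1.0, hi))
                j = random.randrange(n_people - 1)
                if j >= i:
                    j += 1
                wealth[i] -= pay
                wealth[j] += pay
        return gini(wealth)

    for cap in (0.02, 0.05, 0.10, 0.25):
        print(cap, simulate_cap(cap=cap))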


I doubt it would change very much. I wrote a similar program a few years ago [1] and tried several different "payment" amounts (fixed and random); each time the result was the same.

What I did not track was the number of times a "person" became rich (for example, how many times a "person" held over 90% of the money), as that did not occur to me.

[1] https://news.ycombinator.com/item?id=14282863
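
Tracking that would be a small addition; a hypothetical sketch (the 90% threshold and the names are illustrative, not from the linked program):

    import random

    def count_rich_rounds(n_people=100, start=100, rounds=100_000, share=0.9):
        wealth = [start] * n_people
        total = n_people * start  # total money is conserved
        rich = 0  # rounds where someone held more than `share` of all money
        for _ in range(rounds):
            for i in range(n_people):
                if wealth[i] == 0:
                    continue  # broke players give nothing this round
                j = random.randrange(n_people - 1)
                if j >= i:
                    j += 1
                wealth[i] -= 1
                wealth[j] += 1
            if max(wealth) > share * total:
                rich += 1
        return rich

    print(count_rich_rounds())

With 100 people starting at $100 each, one person crossing 90% of the total is effectively unreachable in a run of this length, so a smaller population or a lower threshold makes the counter more informative.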



