Hacker News

Almost as bad as the bug where they made light the same speed in all reference frames. I heard they didn't even fix the bug, they just put in some wonky fixes that mess things up when you go really fast or get close to the world boundary, since they figured nobody would ever do that!


I think that's a feature rather than a bug -- or, rather, a requirement for preventing buffer overflows. A hard wall at the limits of the addressable memory would be too obviously artificial, so that was a no-go. But time-dilation allows them to push faster particles into lower and lower priority threads. A lot of handy optimisations there.
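If you squint, that scheduling trick can be sketched in a few lines. This is a toy sketch only: `tick_priority` is a made-up name, and everything except the Lorentz factor itself is invented for illustration.

```python
def tick_priority(speed, c=1.0):
    # Faster particles land in lower and lower priority threads:
    # their share of simulation ticks shrinks by the Lorentz factor.
    gamma = 1.0 / (1.0 - (speed / c) ** 2) ** 0.5
    return 1.0 / gamma  # fraction of ticks this particle receives

assert tick_priority(0.0) == 1.0               # at rest: full tick rate
assert tick_priority(0.8) < tick_priority(0.1) # faster -> fewer ticks
```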

No, where I think they really screwed up was in level of detail -- you know, where you get to smaller scales and start generating details procedurally rather than pulling geometry out of memory. I mean, I understand why it's necessary to do that -- who wants to store the position and momentum of every particle in the observable universe? -- but they could have at least faked some kind of continuity between observations, rather than calling Math.random() every time!
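For what it's worth, the continuity fix being asked for is cheap: derive the seed from the observation coordinates instead of drawing fresh randomness on every read. A toy sketch under that assumption (`observe_seeded` and `observe_unseeded` are made-up names):

```python
import random

def observe_unseeded(position):
    # Math.random()-style: fresh randomness each observation,
    # so repeated looks at the same spot disagree.
    return random.random()

def observe_seeded(position):
    # Hypothetical fix: seed the RNG from the observed coordinates,
    # so repeated observations of the same region are consistent.
    rng = random.Random(hash(position))
    return rng.random()

# Only the seeded version fakes continuity between observations.
assert observe_seeded((0.0, 1.0, 2.0)) == observe_seeded((0.0, 1.0, 2.0))
```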


And that damned issue with shared memory of entangled particles. That will not end well.


It's not really shared memory. It's more of a compression hack that coalesces a particular state matrix of a whole set of particles into a single vector; when you read out the state of a particular particle, it unfolds the coalesced state by partial orthogonalization, starting with a random state vector from the spanning vector space.

You're guaranteed that for any other particle of the remaining set, the state vectors are orthogonal to the state you just read out. If you do the experiment with two entangled particles, by reading one, you'll immediately know the state you're going to read for the other one.
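A two-particle toy version of that readout guarantee (`read_entangled_pair` is a made-up sketch, not anything's real middleware API):

```python
import random

def read_entangled_pair():
    # Reading the first particle picks a random state vector; the
    # second particle's state is forced orthogonal to what was read.
    first = random.choice(["up", "down"])
    second = "down" if first == "up" else "up"
    return first, second

# Reading one immediately tells you what the other will read out as.
a, b = read_entangled_pair()
assert a != b
```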

If you do it for more than one particle, then for each state you read, you reduce the size of the remaining set of vectors that may come out.

From a compression/encoding point of view it's kind of neat. If you do it for lots and lots of systems of many particles in a certain microstate, on average you're going to end up with nearly identical results for each total readout process, although the precise values and the order in which they appear will vary wildly.
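That statistical claim is easy to sanity-check with a toy microstate: the readout order is random, but the aggregate of any total readout is fixed (all names here are invented for the sketch):

```python
import random
from collections import Counter

# Toy microstate: a fixed multiset of particle states.
MICROSTATE = ["up"] * 3 + ["down"] * 3

def readout(system):
    # The unfolding reads states one by one in a random order,
    # but the totals are pinned down by the microstate itself.
    states = list(system)
    random.shuffle(states)
    return states

# Precise values and order vary wildly; the aggregate never does.
runs = [readout(MICROSTATE) for _ in range(5)]
assert all(Counter(r) == Counter(MICROSTATE) for r in runs)
```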

Now, because all this iterative state unfolding more or less amounts to a kind of hash function, you want to make sure that users of the middleware don't rely on hidden internal state, or assume some kind of hidden seed. The downside is that this particular implementation detail destroys locality, which kind of goes against the whole idea of the fixed-event-propagation-differential system, which aims to isolate high-energy processes from neighboring parts of the simulation by easing their timespace metric.

There are a few corner cases (which actually turned up in a lot of instances in the simulation) where out-of-bounds stress-energy densities are (successfully) isolated from the rest of the simulation, leaving visible to the rest of the simulation only a meta-description of the contents inside the region. That meta-description boils down to mass, charge and spin (where, due to some interesting interaction, charge and spin happen to have the same kind of visible effect on the outside timespace metric) plus the surface area of the boundary region.

However, right at the boundary, the cursor iterating over the aforementioned state vector unfolding may cross into the isolated region. At first it looked as if this could break the simulation. But it allowed for a wonderful hack for incremental garbage collection inside the isolated regions: treat the whole isolated region as a single meta-particle holding N instances of the state vector, where N is proportional to the surface area of the boundary region. By randomly selecting one of the quasi-frozen states from inside the isolated area, we can call its destructor by unfolding its complement into an entangled particle that happens to lie just outside the boundary region.

This goes nicely with another hack, introduced early in development: The on-demand spawning of entangled particle/antiparticle pairs, which can be used to transmit forces between the actual particles you want to simulate.

By applying these on-demand spawns to the isolated regions, it turns out these regions can be garbage collected by kind of "evaporating" their contents through an entropy-maximizing process, thereby avoiding the need to faithfully reconstruct the original information; instead, the remaining hash value is uniformly distributed over the simulation and used to seed the entropy pool from which random numbers for the unfolding process are drawn.
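The evaporation loop above can be sketched as a toy incremental GC (class and method names are all invented; the only thing taken from the description is "pop a random quasi-frozen state, destroy it, fold a hash into the entropy pool"):

```python
import random

class IsolatedRegion:
    """Toy meta-particle: the outside sees only mass, charge, spin,
    and a state count N proportional to the boundary surface area."""

    def __init__(self, states):
        self.states = list(states)

    def evaporate_step(self, entropy_pool):
        # Pick a random quasi-frozen state, call its "destructor",
        # and distribute its hash into the simulation's entropy pool.
        state = self.states.pop(random.randrange(len(self.states)))
        entropy_pool.append(hash(state))

pool = []
region = IsolatedRegion(range(10))
while region.states:
    region.evaporate_step(pool)
assert len(pool) == 10 and not region.states  # fully collected
```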

Kind of neat, don't you think?


Oh you're good.



