
AMD's vision of "heterogeneous computing" (I think they came up with a new name for this?) would essentially be the "true" fusion of a CPU and a GPU, in the sense that the chip consists of general-purpose execution units (current CPU cores) and special-purpose parallel processing units (current GPU SIMD/VLIW units) operating concurrently, so that both sequential and parallel code can be executed efficiently on a per-thread basis and across concurrently processed threads.

Something like this would really help with things like raytracing. But then again, as Carmack mentions, the huge disadvantage is the acceleration structures, which need to be discarded and rebuilt per frame for dynamic objects. It's like saying that raytracing is by its nature inferior (or at least very wasteful) from a performance point of view. It's the kind of problem you can't simply solve by throwing more hardware at it.
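To make the per-frame cost concrete, here's a minimal sketch (hypothetical, not any engine's actual code) of a median-split BVH build over object bounding boxes. The point is that when objects move, the tree is invalidated and this whole O(n log n) build has to be repeated every frame:

```python
def bvh_build(boxes):
    """boxes: list of (min_xyz, max_xyz) tuples. Returns a nested tree."""
    if len(boxes) <= 2:
        return ("leaf", boxes)
    # Pick the longest axis of the combined bounds to split on.
    lo = [min(b[0][a] for b in boxes) for a in range(3)]
    hi = [max(b[1][a] for b in boxes) for a in range(3)]
    axis = max(range(3), key=lambda a: hi[a] - lo[a])
    # Median split on box centers along that axis.
    boxes = sorted(boxes, key=lambda b: b[0][axis] + b[1][axis])
    mid = len(boxes) // 2
    return ("node", bvh_build(boxes[:mid]), bvh_build(boxes[mid:]))

# Per frame: animated geometry invalidates the tree, so it gets rebuilt.
frame_boxes = [((i, 0, 0), (i + 1, 1, 1)) for i in range(8)]
tree = bvh_build(frame_boxes)  # work repeated every single frame
```

Real engines mitigate this with refitting (keeping the topology and only updating node bounds), but that degrades tree quality for large motions, so the rebuild-vs-refit tradeoff never fully goes away.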



Rasterization also requires "acceleration structures," kept in memory. I could be wrong here, but I think the point John was trying to make was that there was again a constant factor handicapping ray-tracing. But constant factors are, well, constant, and in 2050 we may well have ray-tracing done in hardware delivering scenes that are indistinguishable from reality.

I wrote a ray tracer once, but it was primitive. So I'm not completely talking out of my butt, just mostly.


> Rasterization also requires "acceleration structures," kept in memory

Yeah, but surely not for the geometry itself. In rasterization, we may use simple low-overhead acceleration structures to efficiently traverse a relevant subset of the whole (but coarsely described) scene for some fancy culling, collision detection (OK, that's not really rendering), etc. But the geometry (vertices, polygons, vertex attributes) does not need to be traversed like that and thus does not have to be stored as individual triangles in an octree or bounding-volume hierarchy or whatnot. Quite a difference in overheads here. In GPU terms, with rasterization you have geometry neatly stored in vertex buffers and an awesome Z-aware traversal method with vertex shaders. In a simplistic current-gen fragment-shader-based raytracer, each pixel traces a ray traversing through your acceleration structure, which may be stored in a volume texture (ouch, so many texel fetches...) since vertex buffers are not sensibly accessible in a fragment shader.
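The per-pixel cost above comes down to running an intersection test at every node the ray visits. As a rough illustration (names and values are made up for this sketch), here's the standard ray-AABB "slab" test a per-pixel raytracer would run over and over while walking the acceleration structure, versus rasterization's single streaming pass over the vertex buffer:

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab method: return True if the ray hits the axis-aligned box.
    inv_dir holds the precomputed reciprocals of the ray direction."""
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        t1 = (box_min[a] - origin[a]) * inv_dir[a]
        t2 = (box_max[a] - origin[a]) * inv_dir[a]
        if t1 > t2:
            t1, t2 = t2, t1
        tmin = max(tmin, t1)  # latest entry across all three slabs
        tmax = min(tmax, t2)  # earliest exit across all three slabs
    return tmin <= tmax

# Ray from (-5,-5,-5) along the (1,1,1) diagonal (components all nonzero,
# so the reciprocals are finite) against a unit box at the origin.
origin = (-5.0, -5.0, -5.0)
inv_dir = (1.0, 1.0, 1.0)
hit = ray_aabb(origin, inv_dir, (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
```

Each interior BVH node a ray visits costs one of these tests, and in a fragment shader every one of those node reads is a texture fetch, which is exactly the overhead the comment is pointing at.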

> scenes that are indistinguishable from reality

This would also require a high dynamic range output device. Looking directly into perfectly raytraced sunlight still won't glare and blind me like the real world does, but indeed, by 2050...


> This would also require a high dynamic range output device. Looking directly into perfectly raytraced sunlight still won't glare and blind me like the real world does, but indeed, by 2050...

You could try looking directly into a beamer (projector) instead of at the screen, if glare is all you want. (And by 2050 I would suspect that the scene is copied directly into the frontal lobe, bypassing the visual cortex.)


Rasterisation doesn't require any real acceleration structures. There are various caches, but nothing comparable to a kd-tree or BVH.



