Why do pretty much all the marketing and examples of "groundbreaking" new/faster tech for raytracing/pathtracing always just show outdoor IBL-lit scenes?
These are easy to resolve: generally within 32 progressions with MIS (multiple importance sampling) for diffuse surfaces.
Similarly, they rarely show anything other than perfectly specular glass or fairly rough microfacet surfaces, which, with indirect caustics turned off (as they obviously are in the above examples), are again pretty trivial to resolve.
Indoor scenes with many lights (lots of them sometimes occluded) and lots of indirect illumination are going to be a lot slower, and for games I would have thought this is important: I don't see how this latter scenario is workable in real time without lots of cheats.
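For reference, the MIS mentioned here usually means combining light sampling and BSDF sampling with something like Veach's balance heuristic. A minimal sketch (the pdf values are hypothetical inputs, not from any particular renderer):

```python
def balance_heuristic(pdf_a: float, pdf_b: float) -> float:
    """Veach's balance heuristic: weight for a sample drawn from
    strategy A when strategy B could also have produced it."""
    return pdf_a / (pdf_a + pdf_b)

def mis_combine(f_light: float, pdf_light: float,
                f_bsdf: float, pdf_bsdf: float) -> float:
    """Combine one light sample and one BSDF sample into a single
    estimate of direct lighting; the weights keep it unbiased."""
    w_light = balance_heuristic(pdf_light, pdf_bsdf)
    w_bsdf = balance_heuristic(pdf_bsdf, pdf_light)
    return w_light * f_light / pdf_light + w_bsdf * f_bsdf / pdf_bsdf
```

The weights for the two strategies always sum to one, which is why the combined estimator converges much faster than either strategy alone on diffuse surfaces under an environment light.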
I think you answered your own question. I know that you know that noise in ray tracing is very inconsistent. It's easy to get people excited about "real time ray tracing" because they expect it to "happen" soon, while most have no idea about the nuances of what affects speed and noise in ray tracing.
If you're gonna cheat, just give up on RT - this is what precomputed shadow maps, light maps, specular maps etc. are for, together with various cheats to clip shadows when they overlap or are already at max "darkness". Same thing with lights: when you have multiple light sources, you usually approximate by stacking the combined illumination values for light sources of the same direction.
I really don't understand why they are chasing RT.
It would also be nice to know how well they can actually run compute shaders, do tessellation and many other things, because this is one heck of a fluff piece, and I can remember at least one of these in the past 20 years that proclaimed real-time RT in hardware was here.
> I really don't understand why they are chasing RT.
1. The highly-parallelizable nature of RT is ideal for the many-core architectures we're heading toward. Note that the PowerVR hardware used in the demo has 4 cores, needs no fans, and uses 10x less power than a traditional rasterization-optimized GPU.
2. The simplicity of implementing physically-based effects leads to a much easier time for artists.
I would argue that physically-based raytracers actually make things more difficult for artists, because they tend to be less flexible. Being physically correct is great if what you get out of the renderer is exactly what you want, but as soon as you need to cheat anything to achieve a particular look, it's much easier to not be bound by physical correctness.
The Disney BSDF [1, 2] is extremely flexible and offers a lot of artistic freedom. The only real limitation is that the BSDF is required to be energy conserving, but that's hardly a limitation. A physically-based renderer is still able to produce non-photorealistic results.
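Energy conservation can be sanity-checked numerically: a BSDF's cosine-weighted hemispherical reflectance must not exceed 1. A toy "white furnace"-style check for a simple Lambertian lobe (illustrative sketch only, not the Disney BSDF itself):

```python
import math
import random

def lambertian_brdf(albedo: float) -> float:
    """Constant Lambertian BRDF value; energy-conserving iff albedo <= 1."""
    return albedo / math.pi

def white_furnace(albedo: float, samples: int = 50000, rng=None) -> float:
    """Monte Carlo estimate of the directional-hemispherical reflectance
    (integral of f_r * cos(theta) over the hemisphere) using uniform
    hemisphere sampling. Energy conservation requires the result <= 1."""
    rng = rng or random.Random(1)
    pdf = 1.0 / (2.0 * math.pi)  # uniform hemisphere pdf
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()  # z is uniform on [0,1] for a uniform hemisphere
        total += lambertian_brdf(albedo) * cos_theta / pdf
    return total / samples
```

For a Lambertian surface the integral evaluates to exactly the albedo, so the estimate should hover near the input albedo; values above 1 would indicate an energy-creating (non-physical) BSDF.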
Maybe; it depends on whether you're doing CG animation (Pixar, Dreamworks style) or more realistic stuff - VFX, photoreal work.
If the former, then maybe a bit.
But the beauty of physically-based is that in many cases you can take a model that's been textured and lookdeved and it will work nicely in many different lighting setups. Before physically-based shading, you practically had to re-do stuff completely when you changed the lighting to get the look you wanted.
However, physically-based still isn't a complete win - you can't lookdev something close up and expect it to look good in the distance - it just won't. You need to be aware of the LODs and create different asset variations appropriately for different distances.
Pixar switched to a path tracer for Monsters University, and it made it easier for the artists (once they unlearned the tricks and hacks they would have needed for less sophisticated renderers).
Path tracing (though not regular ray tracing) is actually pretty tricky to parallelize. I'd be interested to hear what their hardware actually does. I'm guessing some hardware implementation of ray intersections against a BVH or kd-tree?
The problem when trying to do path tracing on current hardware is that secondary (reflected) rays are incoherent, so the GPU will stall one ray while waiting for another that hit something, leading to poor utilization of the GPU. It's still normally faster than on a CPU, but compared to "simpler" problems like some linear algebra (machine learning or similar), the perf advantage of GPU vs CPU is low.
I thought GPU techniques do something like accumulate the secondary rays and use really fast GPU sorting algorithms on them before continuing, to bring back coherence.
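Roughly, yes: the idea is to bin or sort secondary rays by a quantized key (direction and/or origin) so that neighbouring threads fetch similar data. A toy CPU-side sketch of the concept (real implementations use GPU radix sorts over Morton-style keys; the key layout here is a simplification):

```python
def direction_key(d, bits_per_axis: int = 2) -> int:
    """Quantize a unit direction into a coarse integer key so that rays
    heading roughly the same way land in the same bin."""
    key = 0
    max_q = (1 << bits_per_axis) - 1
    for c in d:
        # map [-1, 1] -> [0, 2^bits - 1]
        q = min(int((c * 0.5 + 0.5) * (1 << bits_per_axis)), max_q)
        key = (key << bits_per_axis) | q
        # interleaving bits (a Morton code) would preserve locality better;
        # plain concatenation is enough for a sketch
    return key

def reorder_rays(rays):
    """Sort secondary rays by quantized direction to restore memory
    coherence before the next trace pass."""
    return sorted(rays, key=lambda r: direction_key(r["dir"]))
```

After the sort, rays that will traverse similar parts of the acceleration structure sit next to each other, which is what restores cache and SIMD coherence on the next bounce.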
Rasterization is also highly parallelizable, and this parallelism has been exploited to great effect over the last decade or two. Given that ray tracing's only advantage over rasterization is specular reflection, it seems unlikely it is really superior in terms of efficiency, especially since specular reflections and many other effects can be achieved via cheating in rasterization.
For real GI, Path Tracing, Photon Mapping and so on are the methods of choice and not Ray Tracing.
Global Illumination, Path Tracing, Photon Mapping and Ray Tracing all belong to a category of algorithms that are better suited for multi-core general processors -- not the SIMD hardware of today's GPUs.
GPUs haven't stuck to SIMD parallelism for quite a few years now.
NVIDIA is not there yet - SMT/DPT isn't true parallelism, but it's good enough even for path tracing. AMD GPUs are pretty much truly parallel, with every execution unit capable of executing any instruction asynchronously.
Ray tracing has other advantages besides shiny reflections: it can handle refraction, shadows, non-triangle-based geometric primitives (including CSG), and has less sensitivity to the total amount of geometry in a scene.
Global illumination is where we want to go to surpass the realism of current games, but path tracing and photon mapping are both fundamentally based on ray-tracing. Any hardware that makes ray-tracing go faster (especially for incoherent rays) should also help to speed up global illumination.
Their hardware actually combines hardware rasterization and raytracing.
Depending on the implementation, raytracing can have many benefits over rasterization, such as:
- ability to produce pixel-perfect shadows
- sub-linear complexity on the amount of primitives (vs. linear for rasterization)
- rays need not be coherent, i.e. one can render non-linear projections or lots of small views
Path tracing is also just another form of raytracing. They demonstrated that their hardware can be used for it (just read the link).
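The sub-linear behaviour comes from the acceleration structure: a ray walks a bounding-volume hierarchy and skips entire subtrees whose bounds it misses. A minimal sketch over axis-aligned boxes (illustrative only; the dict-based node layout is hypothetical):

```python
def hit_aabb(origin, inv_dir, lo, hi) -> bool:
    """Slab test: does a ray (origin, 1/direction) intersect the
    axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(node, origin, inv_dir, hits):
    """Visit only subtrees whose bounds the ray touches, so large parts
    of the scene are culled without per-primitive tests - the source of
    the sub-linear cost in primitive count."""
    if not hit_aabb(origin, inv_dir, node["lo"], node["hi"]):
        return
    if "prim" in node:
        hits.append(node["prim"])
        return
    for child in node["children"]:
        traverse(child, origin, inv_dir, hits)
```

A rasterizer, by contrast, must at least consider every triangle once per frame, which is where the linear-vs-sub-linear distinction in the list above comes from.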
> It would also be nice to know how well they can actually run compute shaders, do tessellation and many other things, because this is one heck of a fluff piece, and I can remember at least one of these in the past 20 years that proclaimed real-time RT in hardware was here.
The chip iterates on "standard" GPU designs made to perform rasterization, so it has everything in place to launch compute jobs or do tessellation. In addition, they have some extra hardware for raytracing. From one of the previous GDC presentations it was more or less clear that you can programmatically activate raytracing with a configurable number of rays, and those rays will deliver additional lighting contributions to the objects of the scene.
RT is just a rendering method; it doesn't save you from actually having to build the scene (textures, materials) and transform it to the player's viewport. For this you need many things other than RT, which is why info about them would be appreciated, rather than just saying "we can do RT at 1/10th of the power consumption NVIDIA does".
As I was saying, the GPU pipeline we are all used to is still there. I have posted in another comment some presentations from the GDC 2014 which you might find interesting.
RT is by far the least efficient method of doing that.
One of the issues with RT is that it actually creates "unrealistic" lighting, at least as far as computer entertainment goes: the global illumination tends to be way too uniform and washed out.
It looks great on the big screen, where your eyes can adjust to the brightness of the screen as the most luminous object in the room, but it looks really washed out on a monitor in a well-lit room.
Agreed, but the "grail" is you invent some technology where the GPU can RT a scene in real time with all of the illumination identified just prior to the frame it is used in, and now you save a ton on artists and modellers who are spending their time making lightmaps etc. Just point the magic gizmo at your geometry and "poof" perfectly lit scene.
No, you don't have tons of artists and modelers making light maps etc.; in 99% of cases they are generated automatically. Depending on the engine and modeling software you use, you can render objects with whatever method you want in non-real time, including raytracing, and the software can generate things like specular and shadow maps.
>Just point the magic gizmo at your geometry and "poof" perfectly lit scene.
There are plenty of global illumination models that do not require doing full path tracing, e.g. radiosity https://en.wikipedia.org/wiki/Radiosity_(computer_graphics), which are much more efficient than ray tracing (even on dedicated RT hardware, FYI) while resulting in a pretty much identical image; path tracing also works pretty damn well.
One of the main reasons these aren't used isn't that current hardware isn't fast enough, but that the resulting image just doesn't look right because of various factors, and then has to be heavily adjusted to increase its dynamic range.
Global illumination creates brilliant results when your eyes perceive it as "real" illumination - for example, when sitting in a dark cinema where the only light source is the light bouncing off the screen. Your eyes then adjust only to the brightness levels of areas within the screen, which lets the image maintain a relatively high dynamic range even though it's pretty much uniformly lit.
With computer monitors this simply doesn't work: the monitor in most cases isn't the brightest object in the room, since you have both daylight and indoor lighting.
The monitor in most cases also doesn't take up nearly as much of your view as a movie screen does, and modern cinema projection solutions can actually provide high dynamic range by having different luminosity levels for different parts of the screen, while your monitor is lit by a single diffused light source.
This pretty much means that when you use global illumination you get much, much lower contrast between lit and shadowed areas, as well as overall darker lights and lighter shadows.
Sure, the scene is uniformly lit, but it just doesn't look like what people expect, partially because they are used to direct illumination, which has been used almost exclusively (with a few exceptions, mostly due to the issues brought up here) for the past 20 years of real-time 3D graphics.
So when you do use global illumination models (it doesn't matter whether they are precomputed like Valve's implementation of radiosity, DICE's real-time RS, or Nvidia's GameWorks path tracing), you either get a really washed-out look or you have to tweak the hell out of it in post-processing to give it a more stylized look.
And again, ray tracing doesn't save you from shaders: you still need to write the same material shaders as you do today. Some of them might be slightly simplified if you are doing things like fetching different specular/reflection maps based on the viewport or scene composition, but overall you will still have the same shaders that make one door look like glass and the other like wood, because it's silly to make 2 doors when you can just adjust a flag in a shader.
Primary rays cache, secondary rays trash. That's why. If anyone comes up with a powerful enough system to do truly useful and fast raytracing in realtime, they will show it.
I wonder if the Sorted Deferred Shading algorithm from Disney's Hyperion renderer could be adapted to realtime rendering? Optimizing a renderer that does one frame every 10 minutes to produce one every 16ms is difficult, but a scaled-down version with a lot of fakery might be possible.
It won't - at least for general ray intersection itself, the deferred sorting doesn't buy you that much. The main benefit it gives you is much more coherent texture accesses, which Disney needs because they use Ptex, which requires coherent access for the filtering.
Almost everyone else is still doing pretty much shade-on-hit one-at-a-time and getting pretty good performance out of it.
PRMan RIS is batching shading points up, but after the first bounce the batches get much smaller, so the win you get from it is much less.
Disney do a lot of work sorting the batches, and that takes time.
Also their render times for full frames are generally 12 hours wall clock or more (like most of the rest of the VFX industry). And Disney are heavily de-noising their images afterwards anyway.
You're talking about an algorithmic problem (of a path tracer), this hardware just traces rays. It supposedly does so at 10x the efficiency of a high-end GPU (which isn't designed to do raytracing in the first place). That's the "groundbreaking" part.
For realtime use, this hardware will likely do no more than a single directional light's shadow plus a single bounce of sharp reflections. This is then combined with a rasterized view of the scene. Keep in mind that this is a "mobile" GPU that hasn't been "scaled up" for the desktop yet.
For offline use, this hardware will at least be more efficient than a GPU.
Why is 'sometimes occluded' important? For each surface your ray hits you need to look at all lights, and if more of them are occluded that's better, as you have less computation to do to add that light in, don't you?
Because in path tracing, when you have many lights, you don't want to be sampling each light each bounce - that just doesn't scale, due to the explosion in the number of rays you have to send to test occlusion. The whole point of path tracing (generally anyway, and especially for games, I would have thought) is that each progression gives fast updates, but with not necessarily great result quality (which you make up for by averaging loads of progressions). If you sample each light each ray bounce/progression you'll get much better results, but it will be much slower.
So you randomly sample/pick one, or a few (maybe vary it based on the current ray importance or throughput) to test.
So given that you can't sample all lights (efficiently anyway), you have to try and efficiently cull/find lights which are visible to the surface being shaded/lit. Because otherwise you might be sending test occlusion rays to lights that wouldn't even contribute anything even if they weren't occluded.
It's close to impossible to do this perfectly (in an unbiased way, anyway), and even if you do a pretty good job, it's extra computation, and the fact that certain lights are sometimes occluded significantly adds noise to the lighting.
These problems can be overcome (somewhat) by using more clever path tracing such as bidirectional path tracing, and mutating "successful" paths instead of creating new ones, such as in Metropolis Light Transport.
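The "randomly pick one light" step above is usually importance-sampled by each light's estimated unoccluded contribution, and the sampled contribution is divided by the pick probability so the estimator stays unbiased. A toy sketch (weighting by raw intensity alone is a simplification; real weights also account for distance and orientation to the shading point):

```python
import random

def pick_light(lights, rng=random):
    """Choose one light with probability proportional to its estimated
    (unoccluded) intensity. Returns (light, pdf); the caller divides the
    sampled contribution by pdf to keep the estimate unbiased."""
    weights = [l["intensity"] for l in lights]
    total = sum(weights)
    u = rng.random() * total
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if u <= acc:
            return light, w / total
    return lights[-1], weights[-1] / total  # guard against float round-off
```

Brighter lights get tested (and shadow-rayed) more often, which is exactly the trade-off described above: one occlusion ray per bounce instead of one per light, at the cost of extra variance when the chosen light happens to be occluded.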
Determining which lights are occluded is non-trivial and can't be done at a low level (too many checks), so you need to precompute it on some level; but if different parts of the scene have different sets of lights visible, you need to somehow abstract that to keep your parallel computations going.
All very doable, but these are the hard problems with the tech that should be solved before we say that ray tracing is solved.
How times change: roughly 20 years ago I did real-time ray tracing of some spheres (can't get simpler) on a DEC Alpha cluster, at low resolution, and was amazed.
Imgtec's linux drivers are also pure garbage in the security sense. I would never install one of their drivers, it would essentially be the same as installing a backdoor.
Photorealism in video games has arrived. It's arrived at least 6-7 times, depending on how you count console generations. Even 16-bit consoles were called photorealistic in their day, and then their successors were, and then THEIR successors were, and so on. It's almost as though we're asymptotically approaching a carefully-chosen marketing-driven goal that can never actually be reached, but towards which we can continually make (ever-tinier) steps.
> Photorealism in video games has arrived. It's arrived at least 6-7 times, depending on how you count console generations.
That's a very good point. Seems to me that it applies equally well to movies: the first 3D animated movies were thought photorealistic in their day, but nowadays they look pretty dated.
Up until the PS2, things were so far from any sort of realism that the whole thing was an emotional appeal to the virtual-reality desire in our brains. But yes, the PR argument is a shallow pornographic tease for consumer brains. I may be too jaded, but realism lost almost all appeal for me long ago; I now 'drool' over limitations and how old games with so many crippling constraints were able to give you timeless memories. The only value of more rendering capability is the equivalent of a video game magazine centerfold.
ps: I remember the first time I ran Half-Life: my computer was so slow I had to run it at 320x200 with every option off. It was a pixel soup, yet it sucked in my soul. Rainbow Six managed to lock me down for hours too. I understand the desire for more possibilities, but we reached this level long ago - I'd say with the 2nd massive-city GTA.
What you say may be true, but there's a difference between 16-bit photorealism and the photorealism possible today, and the difference is that it's possible to find rendered images that are hard to distinguish from real life today. In the 16-bit era? Not so much.
To give some examples, take a look at these images created in Blender:
Most of those are pretty evidently distinguishable from real life...
The issue here - and the point being made - is that we are already at the stage where many CG images can fool even close observers, and we've been here for some time.
We've been passing around these "CGI OR REAL?!!!!" images for a few years at least, and when they are first created people really can't tell the difference, yet somehow when people get used to the tech and revisit these images a few years down the line, they look obviously synthesized.
Ditto older movies like Jurassic Park, whose cutting-edge CGI was convincing at the time even when viewed frame by frame, but looks downright obvious to savvy eyes today.
So clearly "photoreal" is something we haven't reached yet. IMO the indicator that we've reached "true photoreal" is when an image rendered 5 years ago still holds up to fresh eyes.
It's important to remember that path tracing for photorealism is considerably more expensive than raytracing. It's like raytracing every pixel hundreds, thousands or even more times, and doing some extra work on top.
I've experienced photorealism in games, even on last-generation hardware - but only in certain scenes. The PS3 / Xbox 360 open-world crime-drama game Sleeping Dogs, for instance, genuinely looks like a movie in certain areas, on foot, at night, when the weather cycle has gone to rain. On the other hand, driving, during the daytime, in the Central district looks more like a Dreamcast title.
The big challenge I see is not photorealism, but consistent photorealism. In the Sleeping Dogs example, the game's engine is optimized for its iconic scenes of rain-slicked Hong Kong backstreets, but the tricks it uses don't hold up well in other contexts.
I think you're right, that the question isn't "is this game photorealistic?", it's "in what contexts can this game achieve photorealism?"
For a long time, the game industry focused on higher resolution, more polygons, and higher framerates. I think now maybe there's more emphasis on textures, lighting, and physics. The trouble is, if you don't have a physically-accurate global illumination solution or physically-correct reflection, refraction, and shadows, there are many contexts in which "photorealism" is unattainable. For instance: shining a flashlight around a dark room, seeing your reflection in a shiny car, turning lights on and off and opening and closing doors in a dark building at night.
Well, the thing is, photorealism actually can be reached - or at least we can imagine what reaching it would look like. A Star Trek-style holodeck, completely indistinguishable from real life, would be true photorealism. I highly doubt we'll reach that point in any of our lifetimes, but that's where we'll eventually end up if gaming technology continues to improve indefinitely.
No we won't; it's no longer paying off to invest that much.
Write a bunch of new shaders - nobody notices anything.
Write a good procedural content generator - suddenly, applause.
The gfx race is over - it died a horrible death on the plateau beneath the uncanny valley. It's a real shame, 'cause the technology invented is just so neat - but that is how it is.
Battlefront just won a ton of awards due to tech they created for that game that makes the terrain look amazingly realistic. It seems like people still notice.
I didn't say for everything and in all cases, but there are cases where you can't tell. GTA V and Battlefront can both produce screenshots with tons of detail (including foliage) that are nearly indistinguishable from reality.
Where games struggle today is hair, skin, and soft fabrics, but there are already good implementations for those that should show up in the next generation game engines.
This will always be an iterative process, but there are times today (and I honestly feel this has only become the case within the past year or so) when games look like reality.
The funny thing about photorealism is that the amount of sharp detail isn't particularly high. We easily consider 80's television broadcasts photorealistic: none of those films and series are criticized for looking unrealistic and artificial. Yet a measly low-resolution broadcast, with an even more inaccurate and distorted chrominance channel, can produce video that is for all practical purposes realistic. For still images, photorealism is even simpler: even a painter can produce photorealistic paintings you couldn't easily distinguish from photographs.
The painter has the advantage that he's a human drawing an image that will be perceived photorealistic by other humans, so his own "is this photorealistic yet" function is the same as mine and yours, i.e. he can alter the image until it looks photorealistic, even if it is not objectively photorealistic.
I didn't have time to cover it in the video but ray tracing makes a huge difference for foveated rendering in VR. Maybe I'll write about it when I have some time.
Intriguing, particularly if the lower-power claim here proves out.
So far nothing meeting the Oculus reqs (970 GTX or better) will fit in a SFF PC, much less a laptop. Part of that is physical size, but 100W+ TDP for a graphics card alone is never going to be acceptable to most people, if only because of cooling problems and fan noise.
Photorealism might be what everyone is looking for, but ultimately it doesn't matter. Many pretty games have come and gone in 5 years, but they've all been out-sold (and out-played) by Minecraft.
This isn't anywhere close to doing a full scene with ray tracing - just some shadows and added reflections. The prettier-looking demos are all static geometry. There's one demo with moving objects, but all static meshes. There's no mention of how expensive the preprocessing they do is (building BVHs or other acceleration structures).
This may be a very interesting piece of hardware, but it's not something that will revolutionize gaming overnight.
Rasterization with post-processing tricks may be "cheating", but it still subjectively looks better than any raytracing-based technology so far (in anything close to real time). Raytracing is mainly interesting for shadows, which can't be accurately simulated with rasterization. There are decent shadow-mapping tricks, but no "one size fits all" solution exists.
It arrives when path-tracing or at least some good GI (global illumination) approximation can be accurately solved in real time.
Ray-tracing has a pretty limited use for photorealism, unless you have a lot of shiny surfaces and such. It can't model diffuse light bouncing off surfaces.
> Ray-tracing has a pretty limited use for photorealism, unless you have a lot of shiny surfaces and such. It can't model diffuse light bouncing off surfaces.
You mean real-time ray tracing as it stands now, correct? Ray tracing in a general sense can simulate photography as a whole, just not efficiently (for now).
Many use ray tracing and path tracing interchangeably and consider path tracing to be a form of Ray tracing (Old school RT would be called Whitted Ray Tracing).
Single-bounce ray tracing isn't very interesting; what we want is real-time global illumination, such as with path tracing. For simple scenes this is possible on a GPU today, at least at low res and with a reasonable amount of noise. If this chip can do with a million triangles what a GPU does today with a hundred, then it's very interesting (not saying it can).
I'm not sure exactly what you are replying to. The original poster said that ray tracing cannot simulate diffuse illumination, which is of course not true.
Micro effects are typically already simulated by shaders.
Ray tracing could in theory simulate the interaction between a frequency of light and the frequency of a medium, although the situations where that would be practical are exotic.
'Semi' real time is a very long way from real time when it comes to ray tracing. Because of the heavy sampling requirements for convergence, it is easy to impress people with interactive but noisy tracing. When you wait for it to converge, however, the parts that take a long time to converge still take a long time to converge.
I'd love to see any progress in animation. Currently they just do motion capture and it still looks like crap (interpolating animation frames doesn't help the cause either).
Ironically the demo is on desktop Linux... actually it's on a MIPS Loongson board which is interesting. I haven't been able to find one at a sane price.
The results aren't good enough to compete in games. Gamers don't care if the reflections and shadows are more authentic if the shading as a whole regresses by several generations.
Just compare the subjective quality of their "accurate rendering" to what that Nvidia GPU they showed can do with cheap approximations:
The results aren't good enough to compete in games that require an expensive dedicated graphics card that costs more than the device this gpu will go in. FTFY.
Compared to any current phone/tablet GPU, it's going to be a big step up.
Going by their claim of "10x lower power than 980ti" this PowerVR card is burning through 25 watts, so it being a visual step up from Snapdragon and Apple SoCs which draw 3-4 watts for the CPU and GPU combined isn't particularly impressive.
> The PowerVR GR6500 is a mobile GPU. Its die size, GFLOPS performance, bandwidth requirements and power consumption mean that it is comparable to the GPUs already available in smart phones today. But compared with a console GPU or looking towards the smart phones and handheld devices of the future, we see a roadmap that scales in capabilities and performance well beyond the GR6500’s specifications. The PowerVR Ray Tracing technology is fundamentally scalable and the efficiency actually increases as we move to more and more powerful cores.
28nm is a previous-generation mobile GPU process. But it would be great to see how this performs for desktop use when they shift to smaller process nodes and scale it to match the power consumption and cost of, say, a GTX 980.
Ray-tracing is pretty different from polygon rasterization, so to perform well and take advantage of the technology, I think games are going to have to be written from the ground up with ray-tracing in mind. That's hard to do when game developers use complex frameworks and tools that are designed around rasterizing polygons.
(It's the same kind of problem as you'd run into if you wanted to do game development in Haskell or Rust. There's no reason why that couldn't work in principle, but things get harder when you step off the well-traveled path.)
As an example, polygon rasterization is sensitive to the number of polygons, but it doesn't matter very much where they are or how they move around.
Ray-tracing, on the other hand, is relatively insensitive to the complexity of the scene, but it's more sensitive to what's on screen and visible right now, and how much of the screen it takes up. Also, objects have to be stored in 3-dimensional tree structures for efficient access, and those trees have to be re-built when things move. So, rendering the Statue of Liberty in high resolution is fine, but rendering a million snowflakes is problematic.
Getting good performance means being conscious of different tradeoffs, and it takes time to figure out a good balance.
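On the "trees have to be re-built when things move" point: in practice, dynamic scenes often refit an existing BVH (recompute bounds bottom-up while keeping the topology) rather than rebuild it from scratch, trading tree quality for speed. A toy sketch with a hypothetical dict-based node layout:

```python
def refit(node):
    """Recompute bounds bottom-up after primitives move. The tree
    topology is kept, so this is O(n) per frame instead of a full
    O(n log n) rebuild, at the cost of gradually worsening bounds
    as objects drift apart."""
    if "prim" in node:
        lo, hi = node["prim"]["bounds"]
        node["lo"], node["hi"] = list(lo), list(hi)
    else:
        for child in node["children"]:
            refit(child)
        node["lo"] = [min(c["lo"][i] for c in node["children"]) for i in range(3)]
        node["hi"] = [max(c["hi"][i] for c in node["children"]) for i in range(3)]
    return node
```

This is why rigid or mildly deforming objects (the Statue of Liberty) are cheap to keep traced, while a million independently moving snowflakes force frequent full rebuilds.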
The same claim could be made for PowerVR offering in general: why doesn't PVR go mainstream. I mean: you could get PS3 level output for probably tenth the energy so why not? Except that it is mainstream with roughly 50% mobile market share. :) The bottom line is: IMG doesn't produce silicon (except for internal use) so you need a partner to go "mainstream" (whatever that means). You're showing your IP on e.g. CES to find those partners (or to validate existing partners' investment in the platform). That's how IP companies operate, unless they are the de-facto monopoly in some segment.
10x less power compared to a gtx980ti, which is 250 watts TDP. That puts their device at 25 watts, which doesn't fit into the power envelope of any popular device form factor. A tablet should be 10 watts maximum at peak load, a phone should be closer to 1 watt. And it can't compete with desktop hardware either.
This is assuming that they run the 980ti maxed out, which might not be the case. If it isn't maxed out, it's a bit of a silly comparison.