> and if it weren't for the RTX graphics cards setting the precedent, there would probably be more research into making things prettier without having to "brute force" things.
Not really. A lot of that research went into making rudimentary versions of RTGI that could run on hardware of that era (see SVOGI). Of course you can bake everything and make amazing-looking levels that are completely static. But that doesn't work for open-world games, especially ones with a day/night cycle, and it doesn't work in highly dynamic scenes. We basically hit the end of the road for scaling graphics without RT. And now we have hardware that can do real-time path-traced rendering of major AAA titles, and it looks miles better than anything smoke and mirrors could ever have delivered.
I'd rather have more RAM in the GPU and give up the ray tracing hardware.
The real problem in the industry is that GPUs are not getting cheaper. A 6GB Nvidia 1060 cost about $250 in 2016, and a basic Nvidia card today that does only slightly better still costs about $250.
Meanwhile, people are buying $100 WalMart laptops and trying to play games on them.
You can buy an RX 6600, which is roughly 2x faster than the 1060 and has 8 GB of VRAM, for $200[1]. If we take inflation into account, the 1060 cost around $300 in today's money. So you're getting about 33% more VRAM and 2x the performance for about 33% less money. In other words, you're getting performance roughly 20% better than a GTX 1080 for 3x less money (the 1080's launch price was $600). And it uses less energy too, so the TCO difference is even larger. And I'm not even talking about the expanded feature set.
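The arithmetic above can be sanity-checked with a short script. The ~1.2x cumulative inflation factor for 2016 to today is an assumption plugged in to match the "~$300 in today's money" figure; the prices are the ones quoted in the comment.

```python
# Sanity check of the RX 6600 vs. GTX 1060 / GTX 1080 price comparison.
# Assumed inputs: GTX 1060 launch ~$250 (2016), RX 6600 ~$200 (today),
# ~20% cumulative US inflation since 2016 (an assumption, not an official figure).

gtx1060_2016 = 250
inflation = 1.20                              # assumed cumulative inflation factor
gtx1060_today = gtx1060_2016 * inflation      # ~$300 in today's money

rx6600 = 200
price_savings = 1 - rx6600 / gtx1060_today    # fraction cheaper than adjusted 1060

vram_gain = 8 / 6 - 1                         # 8 GB vs 6 GB -> ~33% more VRAM

gtx1080_launch = 600
price_ratio_vs_1080 = gtx1080_launch / rx6600 # how many times cheaper than a 1080

print(f"1060 inflation-adjusted: ${gtx1060_today:.0f}")
print(f"RX 6600 is {price_savings:.0%} cheaper")
print(f"VRAM gain: {vram_gain:.0%}")
print(f"1080 launch price / RX 6600 price: {price_ratio_vs_1080:.1f}x")
```

So the savings and the VRAM gain both work out to roughly a third, and the 1080 comparison comes out to 3x, not 4x.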
I mean, you could have a bunch of other things in the GPU instead of those cores, but the reality is that ray tracing serves multiple target markets for GPUs very well, and actually meets a demand people have: better graphical quality. In contrast, while I'm jealous of my friend with a 16GB 4080, versus my paltry 10GB 3080, it does not realistically impact anything I do outside of occasional ML retraining tasks. Neither did my old 8GB card. Gaming workloads just generally don't need it, because people there are mostly bound by compute, not texture or RAM limits -- and solutions like super-resolution/upscaling instead improve performance and memory usage all at once.
> and it costs about $250 for a basic NVidia card today that can do slightly better
Eh, the 3050 is like 30% more efficient in every single benchmark versus a 1060, with 30% more VRAM, and costs like $280. I don't think 30% is "slightly better". The Arc A750 also is around the same price point and is nearly 80% better, but I admit Nvidia has the best software stack on the market at the moment so the 3050 is a more fair comparison.