You'd be surprised just how many layers of abstraction there are between getting something done 'outside' the context of the GPU, across the CPU/GPU bridge, and getting it done on the GPU in a modern 3D stack these days. You can do things any number of different ways - pass off a blob of data for rendering, or write shader programs that get compiled for the GPU when the app requires them. Those compilers are not open (shader compilers are an arcane and highly contentious realm of IP-rights-holders in a very competitive and volatile industry), and oftentimes the hard work of a 3D developer is spent moving existing assets (code/resources) from one 3D-pipeline fashion-runway du jour to the next ..
It is pretty arcane.
That said, you can of course write a software renderer and simulate a fair amount of the work that the GPU would usually offload from the CPU - and in many cases this has been applied successfully, e.g. in the emulation world, to the task of preserving legacy binary assets when there is no source code to port. The emulator projects have an impressive track record in that regard.
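To give a feel for what "simulating the GPU's work on the CPU" means at the lowest level, here is a minimal sketch of the triangle-coverage test that sits at the heart of any software rasterizer. It uses the standard edge-function (barycentric) inside/outside test; all names are illustrative, not from any particular 3D API.

```python
# Minimal software-rasterizer sketch: fill one 2D triangle into a pixel
# buffer using edge-function coverage tests -- the same inside/outside
# test a GPU's rasterizer stage performs in hardware.

def edge(ax, ay, bx, by, px, py):
    # Signed area term: which side of edge (a -> b) the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(w, h, tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    buf = [["." for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Sample at the pixel centre, as a GPU does.
            px, py = x + 0.5, y + 0.5
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # Covered if the sample is on the same side of all three
            # edges (clockwise winding in y-down screen coordinates).
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                buf[y][x] = "#"
    return buf

if __name__ == "__main__":
    frame = rasterize_triangle(16, 8, [(1, 1), (14, 1), (7, 7)])
    print("\n".join("".join(row) for row in frame))
```

A real renderer layers depth testing, interpolation, and shading on top of this inner loop - which is exactly the work the emulator authors end up re-implementing when the original hardware is gone.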