Doesn't the CPU/GPU bottleneck which is already assumed to be slow actually provide the perfect opportunity for abstraction over a network protocol? Sending "what to draw" and "how" (shaders) over the wire infrequently and issuing cheap draw commands on demand? I think GPUs provide a better situation for a network first model than was available when X was designed.
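To make the idea concrete, here's a minimal sketch of what such a retained-mode wire protocol could look like (everything here is hypothetical and illustrative, not any real protocol): heavy assets like shaders cross the wire once at setup, and the per-frame draw command is only a handful of bytes.

```python
import struct

# Hypothetical retained-mode protocol sketch. Message formats and
# opcodes (0x01, 0x02) are invented for illustration.

def upload_resource(resource_id: int, payload: bytes) -> bytes:
    # One-time message: shader source, mesh data, textures, etc.
    return struct.pack("<BII", 0x01, resource_id, len(payload)) + payload

def draw_command(resource_id: int, x: float, y: float) -> bytes:
    # Per-frame message: just "draw resource N at (x, y)".
    return struct.pack("<BIff", 0x02, resource_id, x, y)

shader = b"void main() { ... }" * 100        # a ~2 KB shader, sent once
setup = upload_resource(1, shader)
frame = draw_command(1, 10.0, 20.0)

print(len(setup), len(frame))                # setup is ~2 KB, a draw is 13 bytes
```

The point being: once the expensive state lives server-side, the per-frame traffic is tiny, which is exactly the regime where a network hop stops mattering for bandwidth (latency is another story).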
Only if everyone agrees on a central rendering model & feature set, which simply isn't the case. 2D rendering is not a solved problem, there are many different takes on it. Insisting all GUIs on a given device use a single system is simply not realistic, which is why nobody actually uses any of the X rendering commands other than "draw pixmap" (aka, just be a dumb compositor)
Sorry, but I can’t take seriously the suggestion that the PCI bus and the internet should be treated the same. You’re talking about 4 or 5 orders of magnitude of difference. Maybe more on some specs.
It’s like saying you should use the same file access algorithms for a RAM disk and punch cards. No you shouldn’t!
> Xorgs decades of highly dubious technical decisions.
People like to say this, yet time and time again, X's design proves to be the superior one in the real world.
Some of it could use minor revisions (adding some image compression would be fine, etc), but it is hard to seriously say things are "highly dubious" compared to the competition.
For one, "network first" for a GUI is not a sane model, and that is only more and more true as more "compute" is pushed onto the GPU.