I would use Wayland, but the most actively-developed tiling window manager, sway, doesn't work with the NVIDIA graphics card I own.
And come to think of it, isn't that a really weird problem to have? On X, the compositor (responsible for actually drawing all the windows on the screen) and the window manager (responsible for deciding how to arrange the windows and what their title bars/borders should look like) can be separate components, so I use i3 to arrange my windows and compton to draw them without tearing.
On Wayland, my impression is that the compositor and the window manager have to be built into the same program, so you run into silly situations where you can run GNOME on your graphics card but you can't run a basic tiling window manager. This also makes it a lot harder to create a new window manager, since you also have to write a compositor and test it on every graphics card.
That last issue could be solved with a reusable library that provided basic compositor functionality for window managers. Except every member of the Wayland community has independently had that idea and written their own, each with a different, incompatible interface and support for different graphics cards. So either we need a meta-library that abstracts away all the different libraries, or we need a standard compositor interface like X had.
The problem is that wlroots is one of several libraries with this goal.
Say I wanted to roll up my sleeves and implement NVIDIA's proprietary interface myself. (Or say in a few years we come up with some new, even better way to allocate buffers or whatever.) On X, I can write a compositor that uses that interface, and it'll work with basically every window manager written since the early 90's.
On Wayland, I could patch wlroots, but then I'll only be able to use window managers based on wlroots. If I want to use a different window manager, I'll need to patch a different library. And depending on how stable wlroots's interface is and how well maintained my preferred window managers are, I might need to maintain patches for old versions of it.
The Wayland situation is probably fine if you only ever want to use the latest version of the most popular two or three desktop environments. But speaking personally, if I wanted to do that, I'd just run macOS.
>The problem is that wlroots is one of several libraries with this goal.
The only successful one, to be clear. You would be unwise to base your compositor on any of the others (libweston, wlc, swc, etc.). wlroots is suitable for basically any use case.
A downstream compositor which wishes to implement Nvidia support could do so and integrate it with wlroots.
However, there are more than political reasons not to "support" Nvidia. Their alternative has genuine technical problems that would render large parts of wlroots broken.
> Their alternative has genuine technical problems that would render large parts of wlroots broken.
If this is indeed true (and I have no reason to doubt you), then I wish you and the rest of the Sway folks would focus on those issues instead of the political arguments, because from the outside it all looks very petty when you write stuff like "Nvidia doesn't support Sway". Nvidia doesn't even know about you.
I've personally spoken at length with Nvidia graphics driver developers. They know who we are and they know the technical problems. If you have no reason to doubt me, then perhaps you shouldn't.
Are the technical problems documented anywhere where non-NVIDIA-driver-developers can learn about them?
Most of the discussion I've read online argues that to support NVIDIA's solution, everyone would need to add if(nvidia) checks to their code and maintain them forever. But this argument only really holds in Wayland's bizarro-world where all programs that arrange windows must also bake in code that interfaces directly with hardware to draw them on the screen.
Absent any other technical problems, it's hard not to conclude that this is just an attempt to use politics to cover up one of Wayland's poor design choices.
> Most of the discussion I've read online argues that to support NVIDIA's solution, everyone would need to add if(nvidia) checks to their code and maintain them forever.
This is a severe misunderstanding of NVIDIA's solution.
NVIDIA's solution is so fundamentally different that you'd basically maintain two compositors internally: one that handles every sane driver, and one that handles NVIDIA.
Nothing in Wayland is incompatible with NVIDIA's approach. GNOME supports it, and KDE recently gained support. However, the NVIDIA approach exists for only one reason: NVIDIA does not want to play along with the open source world, and wants to do the least amount of work possible on its proprietary driver.
Unless NVIDIA turns their ship around, supporting NVIDIA is harmful to our community. Although, if they'd just keep up with our standardized APIs, maybe we could somehow accept their ridiculous proprietary driver, which has no place in this decade.
> NVIDIA's solution is so fundamentally different that you'd basically maintain two compositors internally: one that handles every sane driver, and one that handles NVIDIA.
If Wayland had a standard compositor interface like X has, this wouldn't be a problem at all, would it? You could write your compositor that talks to the hardware using your "sane" interface, and someone else could write a compositor that uses the NVIDIA interface, and users could pick the compositor that works for them and use it with any window manager they choose.
It's only because of Wayland's design that "GPU manufacturer creates nonstandard interface to their driver" is some giant, existential, ecosystem-fragmenting threat to the open-source community.
X11 has a "standard compositor interface", but X11 compositors generally don't talk to the hardware directly, they talk to standard X11 drawing APIs like OpenGL or XRender. What you're talking about is really "Xorg drivers", a legacy from the time when Xorg was responsible for display hardware, and the OS kernel was responsible for every other kind of hardware.
Hardware definitely needs drivers, but these days the Linux community has settled on the model that all drivers should live in the Linux kernel, where they can share common code like "talking to the PCI bus" and security infrastructure. What you're proposing is that, instead of having one driver layer from "standard API" to "hardware interface", there should be multiple redundant driver layers, so that NVIDIA can implement whichever layer optimally balances their desire to support customers with their desire to hide their intellectual property.
That's not a crazy position (NVIDIA certainly holds it, and they're no small fry!) but telling Libre graphics-stack people "you need to make your architecture more complex and do extra work for free, to make this multi-billion-dollar company feel more comfortable" is always going to be a difficult proposition.
> It's only because of Wayland's design that "GPU manufacturer creates nonstandard interface to their driver" is some giant, existential, ecosystem-fragmenting threat to the open-source community.
That's not how I think of it. By analogy, consider POSIX. Every major operating system supports the POSIX interface--all of them except one, that is. But no one goes around calling that one OS a "giant, existential, ecosystem-fragmenting threat to the open-source community."
Essentially, you have to pick a point somewhere in the stack where you define the common interface between the different hardware vendors. The problem is that one hardware vendor, because it's the 800 lb. gorilla who loves throwing its weight around, decided to not implement the common interface, hoping that its weight will force everyone else to adapt. Wayland is unfortunate enough to be in a position where it relies on that common interface being supported, but this isn't the only place that NVidia has decided to try to scuttle cross-vendor compatibility (GPGPU computing is a very notorious case).
I'm not sure what victory is being declared here: is Wayland likely to run on more than a tiny fraction of devices any time in the near future? If not, that "one OS" can take a large part of the blame.
> If Wayland had a standard compositor interface like X has, this wouldn't be a problem at all, would it?
It does. It's called Wayland.
I suspect you might instead be thinking of a standard window manager interface to plug into a compositor, rather than the other way around.
The Wayland approach has been to just make it easier to write a compositor, instead of porting X's complexities and odd design choices.
However, nothing stops you from making a compositor with such an interface. It won't be part of Wayland, but it would be its own spec.
However, this suggestion would be a case of symptomatic treatment. The root cause of complexity is NVIDIA.
> It's only because of Wayland's design
This is not true. It's just because Wayland is not X, and NVIDIA only has support for X.
Under X, every driver had its own integration, which was a pain to make, maintain and use.
For Wayland, the community got together to make a generic API that had nothing to do with Wayland, allowing applications (be it a Wayland compositor or something else) to not have to think about what they're running on.
NVIDIA, however, is just trying to throw more duct-tape, taking us back to the X era.
You may ask "Why doesn't everyone just implement EGLStream?" That is because EGLStream has no merit; it is just the approach that would be easiest for NVIDIA to implement. Why would we make everything else worse, spending significant time and effort redesigning everything, all to save NVIDIA a few bucks?
If they have actual concerns, we can solve them as a community, but considering that everyone has invested significant time in this project, we are not going to throw it all away and pick a vastly inferior model just so that NVIDIA doesn't have to do anything. NVIDIA is just one GPU manufacturer out of many, and on Linux, a small one.
As for EGLStream problems: Performance and stability are the big ones. Ironically, EGLStream is bad for gaming, as we cannot do direct scanout with it (an optimization where full-screen content gets thrown directly to screen instead of going through compositing).
This would be a great idea if NVIDIA was open to suggestions so that things could be fixed.
However, they mostly refuse to play ball. They did seem to be preparing an alternative suggestion, but they seem to have dropped it entirely in favor of just dumping EGLStream patches on KDE and GNOME, hoping everyone else will be forced to follow.
The alternate suggestion would work like EGLStreams; the current Nvidia driver can't handle anything more.
EGLStreams works by having children pass ownership of the surface they want to display to their parent for compositing. This invalidates said surface.
GBM is a coherent model. Handles are passed around to prevent the need for copying, but ownership isn’t transferred.
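The contrast between the two models can be sketched with a toy example. These classes are illustrative stand-ins, not the real EGLStreams or GBM APIs; they only model the ownership semantics described above:

```python
# Toy model of the two buffer-sharing semantics discussed above.
# Illustrative only -- not the actual EGLStreams or GBM interfaces.

class StreamBuffer:
    """EGLStream-style: presenting a frame transfers ownership."""
    def __init__(self):
        self.valid = True

    def present(self):
        # Ownership moves to the compositor; the client's handle
        # is invalidated and must not be reused for rendering.
        self.valid = False
        return "frame"

class SharedBuffer:
    """GBM-style: a handle is shared; ownership stays with the client."""
    def __init__(self):
        self.valid = True

    def export_handle(self):
        # The compositor gets a reference to the same buffer; the
        # client's handle remains valid, so no copy is needed.
        return "handle"

s = StreamBuffer()
s.present()
print(s.valid)   # False: the client-side surface was invalidated

g = SharedBuffer()
g.export_handle()
print(g.valid)   # True: the client can keep using the buffer
```

The point of the toy model: under the stream model the producer loses its surface on every present, while under the handle-sharing model both sides can refer to the same buffer concurrently.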
Nvidia’s driver is a black box, but I don’t think they can do things that way without some significant driver refactoring. Obviously that’s costly, so they’re trying to avoid doing it.
AMD took a gamble when they began their open source initiative. It’s starting to pay off now with stuff like Stadia and the Valve stuff. I feel like Nvidia is going to have to do an open driver eventually. I hope they’ve got at least something in the works that we don’t know about.
I’m not going to suggest replacing Nvidia cards if they’re working fine for what you do. Just look at AMD (or Intel next year) if you start hunting for your next card. Don’t fall into the brand loyalty pit.
NVIDIA writes and maintains their own Xorg graphics driver on top of their kernel driver. NVIDIA is apparently not interested in providing the same courtesy to wlroots.
(I don't know how much NVIDIA has contributed to GNOME and KDE's Wayland compositors)
GBM is a very useful standard if you only ever want to request buffers from drivers that use the Linux kernel's GPU buffer management and modesetting code. (Preferably only Mesa-based drivers, too.) NVidia doesn't use this and probably can't, for licensing and other reasons. If you want to talk to any graphics driver that isn't intimately entwined with the right parts of the Linux kernel, GBM is basically useless. It is not in any way, shape, or form a generic standard for buffer management.
(In principle GBM isn't quite a single-implementation, Mesa-only standard--third-party implementations are possible, though the only one that exists right now is by ARM for their newer Mali GPUs. People seem to have had mixed results with it, and I'm not sure it's even intended to run desktop Wayland.)
I'm not super familiar with the details of this stuff -- does this mean that there's basically no hope of any wlroots-based window manager ever working on non-Linux operating systems like BSD?
Most of the BSD variants have some (generally outdated) port of the Linux kernel graphics stack, sometimes even with a wrapper layer to try and make the BSD kernel internals look enough like Linux for it to run unmodified. So it's probably not completely hopeless, but that's mainly because the best shot at getting the graphics acceleration it needs is a straight port of the Linux kernel drivers. (At least for non-NVidia users.)
Intel and AMD maintain open source drivers that have supported this for ages. It's only NVIDIA that still tries to shovel outdated proprietary drivers without support for community developed standards down our throats.
Given that their proprietary drivers are also not particularly good, NVIDIA is not a very popular choice for a Linux machine.
Just remember that it took them almost a decade to add KMS support.
It's an interesting position to suggest that nvidia which has no obligation to support you in any way shape or form is shoveling anything down your gullet by not supporting the standards you prefer.
You could vote with your wallet, but there aren't enough Linux users to move anyone's needle as far as GPUs go.
> but there aren't enough Linux users to move anyone's needle as far as GPUs go.
Well, for server and ML payloads, we are the vast majority. Things like Google Stadia are certainly enough to move needles, and if AMD ends up able to compete in ML with future products, then we'd be able to make a huge dent in NVIDIA's revenue.
NVidia has a different problem too. Every time I want to install one of their cards for computing purposes only (not graphics), the installation procedure starts messing with my display system. Drives me crazy.
Nvidia has excellent support; I think you are conflating Linux with open source. For decades, Nvidia had (and arguably still has) excellent support for the former without really caring about the latter.
Are you referring to the small decade it took them to support KMS so that their driver behaved even remotely like a modern one?
Or perhaps their marvelous installation methods of running a random script as root that rewrites configuration files, and its configuration interface that likewise also rewrites configuration files in attempts to get multi-monitor setups working that until recently hardly ever worked?
Maybe you are referring to their magnificent support, forcing you to stay on outdated kernels as upgrading would break compatibility and render you without a functioning graphics adapter short of a VGA-resolution framebuffer console?
It could also be their fantastic backwards compatibility, requiring you to keep track of driver series compatible with certain adapters, where every other GPU in existence just works OOB.
I used NVIDIA up until about two years back. While you could arguably get things to work, claiming they had excellent support is laughable at best.
I think he refers to the fact that Nvidia's drivers provided excellent OpenGL support for many years compared to the absolute dumpster fire that fglrx was. Before Valve decided to pay attention to Linux and thus prod AMD to improve their drivers, if you wanted anything approaching serious 3D performance on Linux, you had to use Nvidia. Anything else would be a waste of money.
Doing what everyone else does != doing it properly
> I wouldn't call this excellent at all
So far, Wayland support is furthest along in GNOME, and even there a lot of features are still missing or broken despite Wayland being well over a decade old at this point. A lot of very basic features (remote desktop, screen sharing, exclusive fullscreen, keyboard/mouse shortcuts) are still in their infancy. Not to mention both GNOME and KDE have implemented their Wayland compositors inside the shell process, so now a compositor crash means you lose all your open apps, which hadn't been a problem on Linux in decades.
> A lot of very basic features (remote desktop, screen sharing, exclusive fullscreen, keyboard/mouse shortcuts) are still in their infancy.
- remote desktop: good point. Considering that screen recordings work fine, I don't see a technical reason that a Wayland remote desktop setup couldn't work.
- exclusive fullscreen: what do you mean by this? Using Firefox, F11 and Super-F both work in Sway and F11 works with GNOME.
- keyboard/mouse shortcuts: maybe? The fact that a random process can no longer read all of your keystrokes seems like a plus, to me. Otherwise, just add a shortcut to your window manager or DE and run a command of your choice.
- losing all apps on compositor restart: Only an issue with GNOME and KDE. Pressing Super-Shift-C on Sway reloads the compositor and not your apps.
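To illustrate the "just add a shortcut to your window manager" suggestion: in Sway, a global shortcut is a single config line. A minimal sketch (the bound command is a placeholder, not a real tool):

```
# In ~/.config/sway/config: run an arbitrary command on $mod+p.
# "my-screenshot-tool" is a hypothetical placeholder command.
bindsym $mod+p exec my-screenshot-tool
```

The compositor owns the keyboard, so the binding works globally without any application being able to snoop on other keystrokes.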
> The fact that a random process can no longer read all of your keystrokes seems like a plus
It is a plus, unless you write software that allows for global keyboard/mouse shortcuts (which I have done). In which case it is just a huge pain in the ass to not have it, and then hearing from some developers that you can just "add a shortcut to your window manager" is incredibly frustrating. It's not like you can trust an average end-user to actually do so, even the ones that do run linux. Then you will get complained at for not having functionality that existed at some point in the past.
Exclusive fullscreen = application runs directly on the screenbuffer, bypassing the compositor. Not just that a window happens to be taking up the whole screen.
Well--no, not in the same way, but it can still be a hint passed along somewhere, to allow the compositor to swap out its root framebuffer for the program's.
Screen sharing is quite important in my workflow. Fellow developers and customers will have me buying Windows or a Mac if all screen sharing applications on Linux stop working. I'm using Slack and Meet. Skype is almost abandoned among my customers.
I would like to see this "not doing like others" approach you're proposing applied to some other protocols, like TCP. Networking would be really fun and the Internet a great success. The very concept of a protocol is that everybody does the same thing; only the underlying implementation differs.
Wayland and its current limitations have nothing to do with, and do not excuse, nvidia's non-compliant implementation.
Funnily enough, this is exactly what I was talking about. Nvidia supported multiple monitors on Linux for decades using TwinView, they simply didn't add support for the open source xrandr for 4-5 years, but even that has been supported for ~7 years now.
> Nvidia supported multiple monitors on Linux for decades using TwinView
And Xorg supported them for decades before using Xinerama. Instead of collaborating with upstream, they just dumped something incompatible and broken, and said "Deal with it".
Multi-monitor display worked perfectly for a long time, long before the "it works on Intel, let's call it the new standard" crew decided that a flat shared framebuffer is the only way to go (apparently because of compositors and nothing else), something that also broke support for multi-GPU and especially heterogeneous multi-GPU.
Zaphod mode is still available. nVidia pushed its TwinView solution while everybody else was using either the Zaphod mode or Xinerama. I don't think TwinView on Linux allowed any dynamic configuration. So, when the rest of the world switched to the RandR extension, every graphic card was able to dynamically add/remove monitors while nVidia users were stuck with a subpar solution.
But I mean what's the point of declaring something to be a standard if the important people weren't interested in implementing and supporting it?
Anyone can come up with a standard in isolation. Getting the relevant people on board and able and willing to support it is the useful bit.
Did they say they'd support it? If they never said they did it seems unfair to criticise. Are you going to support my graphics standard that I just made up?
Intel and AMD care about it. If the Linux desktop market were to grow, Nvidia would quickly change their mind.
Now, this market growing is another question...
But for me, Nvidia GPUs are just not a possibility, because they lack good drivers on Linux.
So how do you want to grow Linux market share if you say FU to a significant part of the potential market? Steam hardware survey is pretty clear about AMD:Nvidia GPU ratio.
I don't think so - I see nobody with a desktop computer these days except gamers or people running CAD or modelling who are basically doing the same thing as running a game.
You can think of any number of fantasy scenarios that would make one of the biggest GPU vendors care about your standards, but sadly that is not the world we live in. The world we live in is one where Nvidia has the best game support on the market (far, far better than AMD - let's not even think about Intel here) and therefore anyone who still plays games on a PC will own one of them unless they are some sort of AMD advocate.
At the end of the day, nobody wants to be reading "we don't support this because they're not nice to us". That's going straight back to the linux dark ages of 15 years ago, where you needed to care about all sorts of weird and arcane details to get a functioning desktop system.
It's also a significant portion of "people who pay for Linux developer salaries" in the form of the few workstation users who use Linux-based software that is often paired with Quadros.
Wherein "death watch" is expected to work without difficulty for the next decade or longer.
They aren't pretending they support Linux, BSD, Solaris, Mac, Windows for a decade after each card is released.
While this was true, ATI/AMD were shipping garbage that barely worked for years. This has only changed recently.
We are a few years into having one dedicated GPU maker in the fold, and are already talking about using your massive 2% market share to strong-arm the other. You could afford to be more humble and less entitled.
It really doesn't matter to me if Nvidia drops out of the linux desktop market or not. I've never owned their hardware and never will, so it makes not an iota of difference to me.
I'm just observing that Redhat is the trendsetter and if they say X is legacy, that makes it so. Unless Nvidia or Nvidia customers decide to pick up where Redhat is leaving off and pay for developers to work on X, but I sincerely doubt that's going to happen.
Unless Nvidia decides to support the new system, they can't plausibly claim to support the linux desktop. I just hope whatever happens will result in less online whining from linux-using Nvidia customers.
I think having a substantial chunk of potential machines no longer work would be a meaningful difference to the overall actual and potential user base regardless of how you feel personally. I think we should therefore act in everyone's interests.
Who's going to put up the money? Not Redhat, they don't want to pay for it anymore. Will the nvidia users? Will they be willing to pay for their card twice? Will Nvidia pay for the continued development of X, when they already seem to loath spending money on linux?
If nobody picks up the slack, the paths ahead seems pretty clear. Either Nvidia produces a proper driver, or their support for the Linux desktop can be classified as legacy at best. Feeling upset about this situation doesn't change the nature of it.
This is exactly the kind of reasoning that has always stood in the way of real people using Linux. It makes me sad that we've come a really long way towards taking that mindset out of the OS and we seem to be moving back to it again.
You may not like my reasoning, but how is my reasoning wrong? Somebody needs to pick up the slack now that Redhat has no interest in funding X. All you're doing is protesting that you don't appreciate this situation, but that's irrelevant.
It doesn't matter who the ones "suffering" are. What do you expect to do, convince me of the moral necessity of supporting nvidia cards so that I turn back time and devote my life to reverse engineering GPUs and implementing FOSS drivers? That can't happen. I don't have the power to support nvidia cards, only Nvidia has that power, and they seem to have no interest in exercising it. It doesn't matter if you convince me that nvidia card support is more important that curing child cancer; the situation remains unchanged. To change the situation you must convince nvidia, complaining to anybody else about it is a waste of your time.
The point I was trying to make is that your end users are fundamentally the ones suffering. You are not suffering, because you avoid the problem by throwing money at it. Nvidia is not suffering, because you as a community don't matter to their wallet.
The only people who suffer are the people we write software for. Maybe that doesn't matter to you, because you write software only for people like you. That's fair. It matters to me though.
Nouveau drivers are pretty terrible (performance-wise) and didn't support the card that I am using last time I could find any real info about it (admittedly more than a year ago, I use a GTX 1070). This uncertainty is a fact of your life if you use nouveau, which is precisely not what I'm looking for on my main workhorse machine.
This is leaving aside all of the trouble that you have with running Nvidia's ML stuff when you don't use their own driver (impossible, AFAICT).
Both of those things are blocking issues for me, which in turn means that using anything that doesn't support regular nvidia drivers is a no go.
I know it's not the developers' fault that they can't reverse engineer Nvidia's closed source crap, but nouveau has always been garbage. Nvidia should adopt and fix it, or at least help.
You can see why Nouveau devs are having difficulties with Nvidia in a Xorg developer conference talk[1].
1. Relations with Nvidia
* NVIDIA changes prevent us from releasing a driver
* Signed firmwares accessible publicly but not redistributable
* Reverse engineering of vbios impossible
2. Communication mostly down
* Main contact/dev left NVIDIA (Alexandre Courbot)
* Most important requests left unanswered...
* ... until more complete code than wanted lands publicly in nvgpu weeks later
Probably not possible (or at least not trivial at all) due to third-party licensing agreements. AMD/ATI didn't open source their closed driver but instead supported the open source one.
> That last issue could be solved with a reusable library that provided basic compositor functionality for window managers. Except every member of the Wayland community has independently had that idea and written their own
Actually the opposite is true.
Recently everyone is converging on re-using the same compositor library, written by among others sway’s author: wlroots.
I would love to use Wayland, currently I only use it to watch movies without tearing. Anecdotally, everything feels brittle when using Wayland, applications crash... I really hope it matures.
Which video card/driver do you use? I have used Wayland for two years now without any noticeable problems. Before that, bugs in GNOME would often bring the whole graphics stack down.
(This is on amdgpu and later intel. I had a lot of problems with nouveau.)
It's some AMD Radeon with 8 GB, so reasonably beefy. Not sure about the driver; I chose AMD for Linux graphics to not have to think about that, I suppose. Nvidia used to be (still is?) a headache on Linux.
>Nvidia used to be (still is?) a headache on Linux.
AMD was worse; the Linux driver had about half the performance of the Windows one. So if you were asking who had better Linux drivers, the answer was NVIDIA (for desktops, not laptops).
For a bunch of stupid, hysterical reasons, the open source X11 server still has all the warts of an early-1990s release. XFree86 used the MIT X11 code dump to implement open X11 on Linux, but outside of certain new things (DRI, the XRender extension) it remained stuck in a "the world is dumb framebuffers with an optional blitter" design.
Over time, a lot of useful stuff broke and became less and less supported, then was reimplemented badly on top of compositors, instead of having a compositing X server like Xsgi.
Security extensions and even basic security tools (private input grabs) got waylaid because toolkits didn't implement them (try running GTK3 on hardware without 32-bit color GL, or on a system that defaults to a lower bit depth than 32/24).
Generally, after letting it get more and more broken, the suggestion of a big rewrite arrived. Said suggestion often results in something that resembles a "dev's first graphics stack", with a lot of basic, important features for daily life relegated to "someone will make an extension for it later" or "use D-Bus to implement it elsewhere, and who cares about compatibility".
X.org is more horrible and also impossible to maintain. It would have been given the boot over a decade ago except for the fact that AMD/then ATI and Nvidia only supported X and video drivers were incorporated into X directly. There was a big push starting in ~2008 where AMD started open sourcing their drivers. Infrastructure started developing in the kernel where it makes sense instead of in X.
Now we've reached the point where there are almost two parallel stacks: Linux's graphics infrastructure, and Nvidia's. The developers don't have any patience for the one holdout against the new system. It is unfortunate for the people buying Nvidia cards, because they are now starting to be labeled as 'using the legacy systems'. Presumably there is some reason why they have no choice in their purchases, but the cost of maintaining systems to work with closed source drivers appears to be too high. My views here aren't exactly current - I've stayed a long way away from Nvidia since ~2009 - but they are probably still somewhat on target.
It is likely that a better designed alternative to Wayland will crop up. It is much easier to compete with than X - there are no drivers that only support Wayland.
Are there any decent high-end laptops that use AMD? I haven't seen anything. Not Lenovo. Not Dell. Not System76 or any other Linux vendors. I'd prefer one too, considering what shit Nvidia drivers are. So the Redhat/Wayland people are essentially saying all powerful laptops that have a chance of running Linux are already obsolete, even ones not released yet? Redhat is powerful, but not powerful enough to decide that. Fuck Wayland if they refuse to support virtually the entire market for pro laptops. It certainly isn't the future without that. That's just delusional.
Yeah, that's what I suspected. Unfortunately MacBook Pros have many other issues with Linux, if the new ones even have drivers at all. As for Nvidia, they won't budge, so Redhat going all in on a technology that isn't supported by a large majority of its user base seems a failing on their part to me. Then again, I don't think they care much about desktop Linux, so I can see where they are coming from: focusing on server products and the minimal UI needed to occasionally run a GUI, rather than something one could use every day like OS X or even Windows. So much for the Linux desktop.
> Every GPU vendor but Nvidia supports these APIs.
> About a year ago Nvidia announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before.
That’s pretty damning for Nvidia. I can see why the developer is so upset. Must be tough working on OSS in that type of hardware environment.
Looks like AMD is the way to go for desktop Linux boxes going forward. Although it looks like both Dell XPS [1] and System76 [2] are still using NVIDIA. Purism Librem uses the Intel i7's embedded GPU [3]. I'm curious what the ideal Linux laptop is these days…
The ideal Linux laptop is one with just Intel integrated graphics and no discrete GPU. Honestly, Intel GPUs are decent enough, even if you want to do some mild gaming. Handling two GPUs on a laptop in Linux doesn't have the best support; a lot of the Optimus/Bumblebee stuff is not very well maintained.
True, I don't do any laptop gaming. Intel GPUs seem to be sufficient as long as they can easily power a full HiDPI internal monitor + 4K external monitor... which Librem's Intel 620 GPU is listed as supporting, but it looks like Librem's 15 v4 only has an HDMI port and not USB-C/DisplayPort, which limits it to 4096x2160@30Hz instead of 4096x2304@60Hz. So I guess you'd have to look at Lenovo, HP etc. for top tier...
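That 30 Hz vs. 60 Hz limit falls straight out of bandwidth arithmetic. A rough back-of-the-envelope check (the ~340 MHz figure is HDMI 1.4's single-link TMDS clock ceiling; the 12% blanking overhead is an assumed ballpark, since real CVT/CEA timings vary per mode):

```python
# Rough pixel-clock check for the HDMI 1.4 limit mentioned above.
# The ~12% blanking overhead is an assumed ballpark; real CVT/CEA
# timings vary per mode.
def approx_pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.12):
    """Active pixels per second, scaled by a rough blanking factor, in MHz."""
    return width * height * refresh_hz * blanking_overhead / 1e6

HDMI_1_4_TMDS_LIMIT_MHZ = 340  # single-link TMDS clock ceiling for HDMI 1.4

for w, h, hz in [(4096, 2160, 30), (4096, 2304, 60)]:
    clock = approx_pixel_clock_mhz(w, h, hz)
    verdict = "fits" if clock <= HDMI_1_4_TMDS_LIMIT_MHZ else "exceeds"
    print(f"{w}x{h}@{hz}Hz needs ~{clock:.0f} MHz, {verdict} HDMI 1.4")
```

Under these assumptions, 4096x2160@30Hz needs roughly 297 MHz and squeaks under the limit, while 4096x2304@60Hz needs roughly 634 MHz, which is why 60 Hz at that resolution requires DisplayPort or HDMI 2.0.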
Sadly my work uses all MacBook Pros (which are amazing machines), but I'm scared to install Arch Linux on them. The Arch Linux MacBook wiki pages are 4+ years old.
Don't overestimate the Intel GPUs. My current system uses a 530 which is getting pushed to its limits trying to render the desktop at 60 Hz, either UHD on AC or FHD on battery. Forget UHD on battery, that doesn't even manage to move the cursor at more than 10 fps.
IMO the ideal GPU for a UHD screen would be either an Iris or a Vega 10. But it's difficult to find the former outside MacBooks and the latter with more than an FHD screen.
Just to back this up, I've recently bought a non-touchbar 13" MBP from 2016, and been amazed at the amount of games I've been able to play no issues on the Intel graphics in it.
Can you list some of them? Since my MBP with much faster Radeon 460 is struggling even with new isometric games like Pillars of Eternity 2 or games like Stellaris.
> Looks like AMD is the way to go for desktop Linux boxes going forward.
Problem is: desktop Linux has an irrelevant market share, and the trivial workarounds (use the Intel IGP, or go buy an AMD graphics card) do not work when you don't just manage your one PC but actually maintain a system that is produced in series...
And honestly, being upset at nVidia leads nowhere. The architecture of Wayland is complete bullshit to begin with, and I'm not sure the graphics stack of any competing system went that way, EVEN for systems that have a single window manager (and that's most other systems... -- so the situation is strangely kind of reversed). So hopefully wlroots will eventually improve the situation, or something like that, but is it even practical to build something with it that does not tie the graphics session to the low-level graphics stack? I hope it is, because if not, the situation is hopeless... :/
And I've not even talked about extra deps like Cuda, or the general lead nVidia still have in perf/W and/or perf/price (IMO AMD is close, but no cigar)
The X protocol was maybe shit, but at least the general approach did not lead to the kind of insanity we are currently seeing. And people are discussing the death of X when the Wayland ecosystem is maybe still 5 years away from decent quality and some reasonable feature completeness... (remoting, anybody? anything good enough for Wine?)
It wasn’t just Wayland that used GBM, plus every other vendor supports it but Nvidia and it’s well documented in Mesa.
But personally I don't care; as long as Intel and AMD are available, it's fine. I see Linux like macOS: it's an operating system targeted at a limited set of hardware. As long as you buy within this group, 90%+ of the problems with desktop Linux will go away.
The HP x360 with a Ryzen 2500u is rather good, and can be had for ~$400 on eBay.
It did take a full year for all the bugs to be sorted out (random kernel panics & freezing); these same bugs were problematic in Windows 10 too, tho. Neither OS had good platform support for quite some time...
Last I checked, sway detects if nvidia.ko is loaded (even if the nvidia gpu has no display) and errors out with some message to the effect of "f you for buying nvidia."
If this check were not here, Nvidia would still not work. This serves to reduce our bug report volume. However, by deleting these few lines of code (or specifying the appropriate command line argument), Nvidia could work tomorrow if they shipped GBM support.
The flag is there for you to override if you think you know better. We don't answer questions or provide support for any use-case with the proprietary driver.
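For the curious, a check like this only needs to see whether the module name appears in /proc/modules, where the first field of each line is a loaded module's name. A minimal sketch in Python (sway itself is written in C; this illustrates the mechanism, not sway's actual code, and the error message is hypothetical):

```python
def module_loaded(name, modules_text):
    """True if `name` appears as a loaded kernel module.

    `modules_text` is the content of /proc/modules; the first
    whitespace-separated field of each line is the module name.
    """
    return any(line.split()[0] == name
               for line in modules_text.splitlines() if line.strip())

# On a real system you would read the file:
# with open("/proc/modules") as f:
#     if module_loaded("nvidia", f.read()):
#         raise SystemExit("proprietary NVIDIA driver detected; "
#                          "pass the override flag to proceed anyway")
```

Note the check keys on the module being loaded at all, which is why it trips even when the NVIDIA GPU isn't driving any display.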
Refusing to run if a particular kernel module is loaded (even if you don't touch it at all) is overstepping. You should at least provide an override flag.
It's implying I have a choice in the accelerators I could rent in a cluster... Yes, it's about using an intel GPU for display and a nvidia GPU to provide local development (latency being the primary reason).
It's just generally taking a reasonable stand ("I don't want to spend my time on Nvidia, especially not for free") into an unreasonable one ("Users who have Nvidia are bad and should be bullied").
> I would use Wayland, but the most actively-developed tiling window manager, sway, doesn't work with the NVIDIA graphics card I own.
May I suggest you buy an AMD GPU next time around; not only because they didn't try to impose their "standard" (which virtually nobody other than them implements, as far as I can tell) without implementing the de-facto standard interface, hurting their customers' access to Wayland to this day... but also because AMD will probably be better value next time you're in the market for a GPU.
> That last issue could be solved with a reusable library that provided basic compositor functionality for window managers.
Sway is based on such a library! (wlroots) However, again, NVIDIA has made it an extra effort to support their GPUs (both in applications and in compositors), and I'm not sure anyone wants to go to extra effort to support a vendor who actively impedes development. By all means, if you want to do NVIDIA's job, be my guest.
I think Sir_Cmpwn has a point in not putting in that effort though.
> May I suggest you buy an AMD GPU next time around
This doesn't help anyone. I bought an expensive Nvidia GPU 6 years ago, when AMD support on Linux was really bad. What do I do, throw out a perfectly good powerful graphics card and spend another $500 to satisfy the needs of some newfangled software which doesn't solve any of my problems?
I've had 6 years to experience the horrors of Nvidia, on multiple machines. I'm never going to buy one of their cards again unless there's a massive change in their leadership. But telling people 'tough luck; basic functionality that your computers have been capable of doing for decades will no longer work because you bought the wrong brand' is simply unacceptable.
> What do I do, throw out a perfectly good powerful graphics card and spend another $500 to satisfy the needs of some newfangled software which doesn't solve any of my problems?
If you want to replace a card which was expensive six years ago, you can spend a hell of a lot less than five hundred dollars to do so. Even a brand-new RX 570, which will outperform a GTX 770 (the highest performance GPU for $500USD or less in 2013), should only cost you about $130USD today. If you're okay with pre-owned cards, you can get better than that for the same money.
> This doesn't help anyone.
It helps people who are in the market for a new GPU, who want it to work well with Linux, especially the cutting-edge stuff.
Even in 2013, though, I think I was able to run Wayland on a discrete AMD GPU (Terascale 2 or 3?) with at least OpenGL 3.3 available, and it worked OK. So AMD got that working no later than 2013.
I can understand not wanting AMD's cards on the top end as of ca. 2013 (since they were not competitive, IIRC), but now they do have competitive offerings, and their support for desktop Linux is incomparably better, which is why I suggest people go with them.
1. The expensive (this is relative) card works fine. Do I throw it out or donate it without any good reason? (Switching to Wayland could wait, IMO, until this card is too obsolete.)
2. Buying a new GPU is a big investment for some people; that money could be used to replace an old low-quality display or other stuff.
No, I'm not missing these points. It's just that nobody except NVIDIA can do anything about that, so I'm suggesting things you could do. If you can't get an AMD or Intel or Qualcomm (unofficial) or Broadcom (or soon ARM) GPU, and you won't (or can't, again because of NVIDIA being antisocial) run Nouveau instead of the NVIDIA proprietary drivers, and you're not willing to port wlroots to EGLStreams, then you simply will not be running wlroots-based compositors on your machine. That's just how it is.
If you bought a 2019 MBP hoping to run Linux with full support for all the hardware, you would be making a mistake; just because I say that it's a mistake, doesn't mean I think you can afford to replace all your computers again once you've made that mistake.
I don't get these replies. I go to specific effort to couch everything I say, I say "next time around", I say "if possible"; and everyone reads it as though I'm somehow out of touch with the fact that not everyone can spend hundreds of dollars on GPUs every year. I know, I'm just saying that steering clear of NVIDIA in the future is an option that will give you a good experience with standard Wayland compositors.
OK, but there is a group of people for whom, at the time we bought NVIDIA, it was not a mistake: it was the better card for the Linux desktop. At that time AMD's open-source driver was bad, the Catalyst proprietary one had terrible performance, and Steam games only supported NVIDIA.
Sure, when I need to buy a new card I will again check what is the best one for Linux, and then I will probably get an AMD card again. (I had AMD before NVIDIA on desktop and laptop, and back then it was a big disappointment, including the fact that AMD Catalyst dropped support for my 1-year-old cheap laptop.)
> AMD Catalyst drop support for my 1 year old cheap Laptop
My advice is: unless you're running a qualified application on a workstation GPU, do not use Catalyst (or whatever they're calling it this year). It is almost always more finicky. Mesa has the best performance, and the most meaningful features for OpenGL, Vulkan, and D3D9, on any GCN 1 or later GPU (and probably r600 or later, but I don't have personal experience with that). AMD staffs people to maintain Mesa, including for GPUs they do not support their proprietary driver on; and if you're running a fresh/rolling-release distro, you'll tend to have a great out-of-the-box experience with Mesa.
Thanks for the response; my story about Catalyst is very old. I was trying to explain to some new Linux people that AMD was terrible a few years back. I bought a new laptop, and within a year Catalyst dropped support for the GPU in that laptop, forcing me to run only distributions with an old enough Xorg, because the open-source drivers were bad at that time.
I know that AMD's open-source support is better now, and my next computer will probably run an AMD CPU and an AMD GPU, but until my current PC gets outdated I will have to continue to use my NVIDIA 970, which is still a decent card for casual gaming.
I've never experienced those supposed horrors across multiple operating systems (FreeBSD, Linux, macOS and Windows 7/8/10). The only time I owned an AMD graphics card it was a total disaster; among other things I remember really bad Linux drivers (proprietary or otherwise). I never had any issues with my current GTX 1080, 4x RTX 2080 workstations, or the 2x V100 nodes I have access to.
> I've never experienced those supposed horrors across multiple operating systems (FreeBSD, Linux, macOS and Windows 7/8/10)
You've completely missed the point. The horrors are in compatibility. NVIDIA chooses to make it difficult for their customers to use their hardware with new applications, and they hold back the whole ecosystem. In addition to the topic at hand (Wayland), their release model holds back Linux kernel releases when their driver becomes incompatible with the upstream kernel because it is not maintained there.
If you just want to run whatever boring applications NVIDIA chooses to allow you to run, congratulations! Go enjoy your "experience", and play it "the way it's meant to be played"! :- )
> among other things I remember are really bad Linux drivers (proprietary or otherwise)
> By all means, if you want to do NVIDIA's job, be my guest.
If I wrote an X compositor that supported NVIDIA's proprietary interface, it would work with pretty much every window manager that's ever been written.
If I patch wlroots to support NVIDIA, I'll still only be able to use window managers based on wlroots; I won't be able to use ones that spun the Wheel of Incompatible Wayland Compositor Libraries and landed on Weston, swc, Orbment, or whatever GNOME uses instead.
This is why it was so infuriating when NVIDIA refused to implement the standard interface: they made the ecosystem "fragmented" for their users, and the PR cost of that falls on the compositor writers rather than on NVIDIA.
Having different compositors at the level they're at allows a lot more interesting innovations to be made, and aside from NVIDIA's tomfoolery, it has not caused significant inconsistency. Everyone uses libinput, almost everyone uses colord, and everyone implements GBM.
Wayland is substantially different from X, and the ways in which it is different are good. NVIDIA did the one thing we needed people not to do.
Have you considered that maybe it is the other way around: the standardised solution is inferior and NVIDIA's superior? Given that NVIDIA engineers probably know a lot more about modern graphics stacks than some Red Hat employee, this seems likely.
Some NVIDIA engineers probably know a lot about graphics stacks; charitably I'd say that those ones aren't the ones tasked with insisting on EGLStreams, which lacks a considerable number of crucial features.
To be honest though, NVIDIA engineers do not spend any time engineering graphics stacks at all; they write graphics drivers for stacks that are already largely specified. They're really good at shipping hundreds of megabytes of dirty hack code to patch popular shaders from ISVs to run slightly faster on their hardware, and probably great at writing shader compilers and threaded GL drivers.
Note the first comment there shows that their EGLStreams implementation shipped with vsync broken. EGLStreams seems attractive to NVIDIA because it was easy to implement (incorrectly). The features they talk about enabling on Tegra are not impossible with GBM, it just may involve adding a feature.
I recommend you simply to switch to AMD if you are using Linux and want a high end GPU. That's the only way for you to get a usable desktop with modern features.
Or wait until Intel will release a new GPU next year, it should work well with upstream too.
Nvidia has no interest in making their cards play well with Linux, so just avoid them completely to keep your sanity and don't waste your time trying to work around their mess.
Going forward, DE / compositor developers will support Nvidia use case less and less (due to Nvidia market share taking a further dip on Linux and Nvidia still not upstreaming their driver). So save yourself the trouble already today.
> I recommend you simply to switch to AMD if you are using Linux and want a high end GPU. That's the only way for you to get a usable desktop with modern features.
This is just not true. Gnome with Wayland is one of the most popular and modern DEs and it supports Nvidia's proprietary drivers today.
OP explicitly mentioned Sway (based on wlroots), not Gnome (mutter). Besides, even Gnome can't handle XWayland with Nvidia today. And if you play games, especially in Wine, that's still a requirement.
While Gnome developers bent under Nvidia's pressure, other compositor developers aren't interested in accommodating exceptions for blobs and wasting their resources on that.
TL;DR: just ditch Nvidia and forget about all their horrors. They really have no place in Linux ecosystem today, because of their refusal to upstream their drivers and because of preventing Nouveau from doing it for them.
Well, they don't play as nice as AMD, but they do at least give you working drivers. They do have a place for consumers; there are valid use cases for Nvidia on Linux, such as:
1) top performance available on the market;
3) (significantly) better performance per watt and less noise for high-end cards.
I still prefer AMD for their play-nice attitude with Linux and seemingly better track record on behaviour towards consumers, but Nvidia for the last years have been offering better hardware.
I have run a GT 1030 with a pair of 4K displays for a few months and the performance is terrible (for a desktop usage). I don't know where the bottleneck is (PCI bandwidth maybe) but you should avoid this card. I have replaced it with an RX550 and I don't have any performance problem anymore. It is not passively cooled, but the fan can be stopped from software if you want.
Given all the above, I'd call them "working", not working :)
> top performance available on the market;
Not anymore, with the new Navi cards. And competition in the Linux market will probably get even stronger with Intel releasing new GPUs with open drivers next year. I.e., there will be no reason to buy Nvidia anymore, with all the blob downsides that aren't going to disappear.
For gaming, 4K is really a red herring today, and for the regular non-gaming desktop use case, you don't need a strong GPU even for 4K.
> (significantly) better performance per watt and less noise for high-end cards.
Also not anymore with Navi.
So I'd say things today look pretty bad for Nvidia on Linux, and they'll gradually lose the Linux desktop market until they start playing nice with upstream. Time will tell if they actually will. I don't think they care, though. And Linux users won't care to use them, either.
> OP explicitly mentioned Sway (based on wlroots), not Gnome (mutter)
My parent commenter did not refer to Sway, that's why I rejected the claim that AMD is the "only way for you to get a usable desktop with modern features".
> even Gnome can't handle XWayland with Nvidia today. And if you play games, especially in Wine, that's still a requirement
I'm not sure what you're referring to. I've successfully launched and played Wine games with Nvidia proprietary drivers in Gnome with Wayland.
> There is currently no accelerated GLX support when running a GNOME Wayland session on top of the NVIDIA drivers, meaning X11 OpenGL applications will use software rendering.
I can't say I know the details, but I have played CS:GO (non-wine) and Overwatch on Wine without any special hacks with performance indicative of GPU, not CPU rendering. Perhaps it's because they're using Vulkan instead, or the docs are alluding to edge cases that I haven't encountered? I have very little technical knowledge of how it works, but I can say that it is absolutely working.
Or, you know, just don't use the Linux desktop. Nvidia's drivers are the only ones with a good Vulkan/OpenGL implementation; the other ones are underperforming severely. Nvidia also exclusively develops CUDA, so it's basically the only serious option for accelerating scientific workloads. Basically no one cares what kernel/window-system developers think, as long as they don't cripple the kernel severely enough that NVIDIA can't work around their nonsense.
This is not about compliance but rather features and performance. When OpenGL was relevant they had by far the largest set of well-thought-out extensions (multi_draw_indirect, for example). Their developer tools were second to none, and they had CUDA as a unique value proposition.
Their blob no longer has a feature advantage, nor a performance advantage, vs. Mesa. Meanwhile it is known to cheat and violate the OpenGL spec to show better results.
I made a comment on this below: X already exists, so why can't people live with X / fix X, and thus utilize the already-existing code, instead of recreating all the interfaces that X already has?
X is a dated mess at this point. Its core architecture assumes a model for drawing graphics that is more than two decades out of date. There have been extensions - plenty of them - to drag it along, and these have added substantial bloat. Also, some extensions were awfully half-baked, turning the X11 protocol into a nightmare to program against.
Wayland has some good ideas for modernizing the stack, based on passing GPU buffers between the compositor and applications, so that applications can use GPU acceleration and actual 21st-century graphics technology to the fullest. But the non-graphics parts of a desktop GUI are hard (e.g. event routing, clipboard, drag and drop, screenshots/capturing...). Wayland suffers from two things in this regard: first, it needs to catch up to the 30-year head start that all other DEs have in implementing that. Second, it is artificially hobbling itself by declaring absolutely trivial information privileged, like the screen resolution and the absolute coordinates of any window on the screen. That is a list of stumbling blocks and showstoppers that have to be worked around.
Indeed. Wayland seems tuned for Red Hat's need to ship a solid desktop that does one thing in one well-supported way. I'm going to miss X's configurability.
Apologies for the dumb question, but I'm looking for a linux desktop soon and some quick googling seems to say yes... but nvidia really doesn't work with wayland?
> The reality is that X.org is basically maintained by us and thus once we stop paying attention to it there is unlikely to be any major new releases coming out and there might even be some bitrot setting in over time. We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum, but let this be a friendly notice for everyone who rely on the work we do maintaining the Linux graphics stack, get onto Wayland, that is where the future is.
The fair warning is appreciated. Anytime we hear something like this in open source (sometimes after the fact), I try not to prejudge, but I definitely pay attention. Linus Torvalds notwithstanding, the parties with the development resources effectively make the decisions. Sometimes such decisions turn out to still be controversial even years after being pushed onto Linux (like GTK3 or Systemd). I haven't yet used Wayland enough to guess how it will turn out.
My concern at the moment with Wayland (and Systemd, Pulseaudio, DBus and a few others) is that Linux is losing modularity.
All these dependencies seem to be getting more or less mandatory. Running a distribution without them is getting harder.
Given that they are all getting pushed by the same group of people at Red Hat, it is concerning as they are effectively gaining control of the Linux userland. And given that their architectural decisions are quite contentious and at odds with many Unix design principles, this is not good.
>they are effectively gaining control of the Linux userland.
but they aren't gaining control by some dastardly power grab, they're gaining control because they're the only ones actually putting in the work. They're simply doing what they think is best for their product, while at the same time contributing their work back to the open source world. That is good, that's how it's supposed to work.
The only reason they're gaining control is because the rest of the Linux userbase has decided that although they might complain about Red Hat's work on Hacker News and deride it as "not good", it's actually good enough that nobody wants to use any alternative projects to the ones Red Hat works on.
> The only reason they're gaining control is because the rest of the Linux userbase has decided that although they might complain about Red Hat's work on Hacker News and deride it as "not good", it's actually good enough that nobody wants to use any alternative projects to the ones Red Hat works on.
The decisions they're making are what prevents others from doing that. When they make a bad choice you would be willing to write code to correct, you can no longer just replace that one component with one that works better, because everything is tightly coupled without clean interfaces between them.
Then since no one else can justify spending the resources to replace the entire Linux userland in order to correct that one problem, the problem remains, and they accumulate. And still no one else can do anything about it unless they have the resources to replace far more than the individual problematic component(s).
> Then since no one else can justify spending the resources to replace the entire Linux userland
Well that's just it, though. Red Hat is spending the resources to maintain and build much of the Linux userland. And they are no longer willing to do so in a way that helps support the userlands for the BSDs or the buffet mentality of building the Linux userland. If people don't want to use the Red Hat userland, then they _need_ to be willing to support a whole Linux userland that resembles what they want. Fighting a set of small individual battles just keeps losing.
So far, nobody seems to have gotten traction on this, I think partly because most of the opponents are only unified in their disagreement with what Red Hat is doing and partly because I think the opponents don't have the resources Red Hat has.
"Well that's just it, though, Microsoft is spending the resources to maintain and build the underlying standards of the PC ecosystem. And they are no longer willing to do so in a way that helps support Linux, the BSDs, or the bazaar mentality of open source. If people don't want to use Windows, then they _need_ to support an entire computing platform that resembles what they want."
I think they are gaining control via a “dastardly power grab”, as you put it.
Software that I have relied on for decades no longer runs, and it is invariably because someone at RedHat consciously decided to break compatibility with the existing Linux ecosystem in some functionality regressing way.
This has happened with the Linux kernel, pulse audio, dbus, systemd, logins, systemd logger, wayland, gtk2->3, and countless others.
Also, the main issue I see with 'modern' Linux ecosystem actors is that they usually do not put on the 'market' a new product that they developed (from scratch, or after forking a traditional one); instead they first take control of an existing piece of software or project, and then they change the way it works, or the direction it takes, or make it un-portable just because.
I wouldn't mind if they released new software, new concepts, new OSes and let the users (the competition? the free market?) choose. If they take the market share, good for them, but they didn't have to come and screw with the ecosystem I picked years (decades, actually) ago. Let the different ecosystems live their lives. I chose this one 20 years ago precisely because I wanted to avoid those things which are now being shoved down my throat!
Would changes (switching) take a longer time to occur? Yes, probably, and I don't think it is a problem, considering today's tendencies in software development which exhibit more problems because of going too fast.
For example, I never read any user saying "I moved to a Redhat distribution because it has systemd, that's so much better" before a couple of distro maintainers (soon to be hired by Redhat) decided it had to be imposed on a distro and then another. Nobody really cared about the supposed advantages brought to the table by systemd, nobody demanded it. And yet...
Same thing for major changes of/in GUI toolkits. If they want breaking changes (as toolkit devs), if they want a complete redesign (as toolkit users), just fork the existing project and name it something else at least a little bit distinctive. Let the project as it is a chance to survive (no matter how small that chance is).
But no, they want a monoculture, they want to spread it as much as they can, and force everyone to follow their way, their idea of the day.
In fact it is a bit like globalisation, where a single culture, a single economic system, a single social system is supposed to eat the world and leave it with a single boring monoculture, the same everywhere.
Don't attribute to malice what can easily be explained by just not caring. The Red Hat people are getting their work done. Fixing their use cases for their users. Their salaries don't depend on upholding some holy Unix virtues. They depend on making Red Hat users happy and willing to pay.
Whenever you want to understand why someone does something then you should always ask what their motivations are. What do they want and what do they need? What forces are acting on them? What is motivating them? I cannot come up with a reason why Red Hat people would want to maliciously break the Unix model. But I do see how they don't have much motivation to uphold it either.
Yes, a lot of open source goodness is due to Red Hat, and a lot of open source programmers' livelihoods are and have been due to Red Hat.
Red Hat earned a lot of goodwill among early Linux people, and fortunately they were able to sustain their work with enterprise business revenue flow, of actual dollars (not just goodwill).
It's also true that a lot of people have been implicitly delegating a lot of foundation on which we depend, to Red Hat and a few other organizations. We build atop that, usually with only a little awareness of what's going on, and maybe we should be better informed about that -- as engineers with immediate responsibilities in our own work niches, and/or as altruistic participants in some broader worldwide collective effort.
There's no Red Hat. They got bought by IBM for $34 billion, yet I keep seeing a lot of references to Red Hat in this post. Please call it what it is: IBM. It's all about money. Much can be said about this, but money has really corrupted and polluted OSS.
Of course the possibility of Red Hat being ruined has been on everyone's minds since the acquisition.
But things akin to the Wayland push happened before that, so I wouldn't attribute that to post-acquisition.
I won't call it IBM, so long as Red Hat retains some autonomy to exercise their know-how and distinct identity (i.e., IBM doesn't kill the golden goose for which they paid serious gold), and so long as most of the most skilled Red Hat people are happy and stay. (If they tried to commoditize everyone as worker drones, that would be unfortunate.)
Also, FWIW, recall that IBM have for many years been a strong supporter of various open source. (Maybe too bad they didn't buy Java/Sun in the end.) For money reasons, of course, but there seemed to be a lot of goal alignment among open source people, and overlapping with libre people.
In the short time I worked at Red Hat (as part of the CoreOS acquisition), several of the redhatters in Raleigh mentioned how much they enjoyed working at Red Hat and how much they hated previous jobs at IBM. This was before the IBM purchase was announced.
I predict a mass exodus as golden handcuffs come off and employees start to chafe under new management.
Wherever they go, I doubt most of them will continue to work on open source software.
I'm not sure if it's a bad thing that these man-hours will be lost to the open source community. I think Red Hat’s involvement in open source has been double edged. Yes, they have funded projects that are crucial to the open source ecosystem, but their main source of revenue is support contracts. That means they have different incentives from most of the users of their software. If a big customer wants some feature, Red Hat tries to add it as fast as they can. If a big customer doesn't want to change their software to use new versions of some dependencies, Red Hat keeps a fork around and applies security patches. These behaviors serve to increase complexity and technical debt while also causing more headaches for maintainers of upstream software. It's very frustrating to have someone say, "I have a bug with your tool." and it turns out they're on an ancient version of libxml that has a bunch of custom patches applied.
Also, Red Hat wants to upsell people onto RHEL, which means shoving RHEL into places it doesn’t belong. I ended up quitting because I was tired of pushing back on boneheaded technical decisions made in the service of these incentives.
I don't mean to single out Red Hat. It's just the company that showed me how the sausage is made. I think these criticisms can apply to any company that develops open source software with the goal of making money from support.
Exactly. And for much smaller organizations and individuals, it is much harder to organize an alternative effort to this set of projects backed up by a big corporation (Red Hat).
I only expect this to happen if systemd and related efforts become really bothersome. Right now it is a bit of a mixed situation. A better init system and some modern basic userland infrastructure were needed. Systemd has brought some good things, but overall the changes are concerning for the reasons outlined in my parent comment.
There are exceptions. Drew DeVault and the rest of the Sway team did write wlroots, a reusable compositor library for Wayland, so Wayland is less of a monoculture.
But it is true that it requires a lot of manpower, money, and will to develop an alternative to e.g. Wayland. However, I see the current situation more as the glass being half full. Imagine that we did not have Red Hat, and others pulling the weight of Wayland (and other technologies, such as Flatpak), the Linux desktop ecosystem would in many ways be hopelessly behind macOS, iOS, and Windows.
I think this is a better outcome than being stuck with '80s/'90s graphics technology forever, even if it is not perfect.
> It seems like a lot of those projects just aren't interested in accepting pull-requests from outsiders.
What kind of pull requests though? Minor fixes or big architectural decisions like ssh forwarding and security features?
Because there's a difference between just accepting PRs and public ownership of future project direction. The second usually doesn't result in a successful project.
That's kind of the point of open-source, public ownership. Sure, you need someone to be in charge and reject outright bad ideas, but when you're refusing to add options that support other people's personal preferences that's when it's starting to become a problem.
It's all these authoritarians that think they need to own the direction of the project that are causing these issues. I'm sure dirty info-hippies who believe in communal project ownership have their own problems too, and that there's a balance, but it seems like we've over-swung.
Do you have examples of working pull-requests which were rejected for what you consider bad reasons? I’m sympathetic to maintainers choosing not to take on additional technical debt so I’d especially be looking for things like willingness to shoulder long-term support costs and other constraints imposed by accepting a PR.
One thing that comes to mind is gnome's refusal to implement type-ahead in nautilus and the gtk3 file picker. It's a very popular and visible issue that I won't get into here, but suffice to say that there are a lot of people who would really like a more traditional type-ahead.
There aren't any specific pull requests I'm aware of although there are forked versions available.
When asked about it the developers involved just say "no", and any attempt to work on a compromise or discuss what would be necessary to include a flag somewhere deep in gconf to revert the behavior is met with a resounding "we're not interested in that feature". Which is fine, but don't keep other people from implementing a feature that they like just because it doesn't fit in with your vision.
That attitude seems pretty endemic to open-source projects involving a lot of (ex) redhat engineers. It's not that the PR is being refused, it's that it's impossible to even have the discussion.
If you don't like the current maintainers' policy on "future project direction", you can always fork the project.
Yes, this involves an increased maintenance burden since you need to actively review what the upstream is doing and figure out how to cleanly pull their changes - but guess what, that's the very same burden that dealing with your proposed changes would have placed on the upstream themselves! "Authoritarianism" doesn't really come into it.
There are ways to architect software so that it can be cooperatively developed, so that you can fork and maintain an individual component instead of an entire stack.
>Around the start of the millennium, Eric S. Raymond was one of the philosophical leaders of open source. His essay and book, The Cathedral and the Bazaar was obligatory reading for executives trying to understand open source. Now, after lying low for over a decade, Raymond is getting attention again, this time for two blog entries in which he rants about how so-called Social Justice Warriors (SJWs) threaten open source.
>[...] Raymond's call to defend open source from SJWs is as deceptive as anything he attributes to his enemies. Free and open source software is too widespread to need defending, and his attempt to create an issue does not map well on to actual events. For those who remember his earlier contributions to open source, his recent comments make for a disappointing epilogue.
If they are gaining control of the Linux userland because they are the only ones willing & able to put in the development effort, then IMO we can hardly complain.
Additionally, no alternative project comes even close to providing the set of features that e.g. systemd or pulseaudio does, not even via modules. sysvinit for example is only an alternative to systemd the same way an SUV is an alternative to an F1 car.
> sysvinit for example is only an alternative to systemd the same way an SUV is an alternative to an F1 car
Perhaps. But this is the same tired old argument that ignores the existence of any other init/daemon-manager aside from systemd.
And I would use the same car metaphor for comparisons with some of the other options, esp. runit, except with systemd in the SUV slot (e.g. my runit systems all behave themselves; my systemd systems frequently hang on reboot, often needing a REISUB).
> Perhaps. But this is the same tired old argument that ignores the existence of any other init/daemon-manager aside from systemd.
Do you mean openRC, s6 etc? They still all use bash scripts (which as a result depend on the dev for quality and can vary quite a bit) vs systemd's clear, uniform service definitions. They don't provide much beyond starting services and are more like wrappers around sysvinit than anything else.
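For anyone who hasn't seen them side by side, the "uniform service definitions" being referred to look roughly like this. A minimal sketch; the daemon name and path are hypothetical:

```ini
# /etc/systemd/system/exampled.service -- hypothetical unit file
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The equivalent under sysvinit (or a runit/s6 run script) is an executable script, so cleanup, logging, and dependency handling are only as good as whoever wrote that particular script.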
> my runit systems all behave themselves; my systemd systems frequently hang on reboot
Every time I've seen this in practice it turned out the system was actually not configured properly (usually crypttab or something related). systemd has a default 90s timer to wait for everything to unmount and shut down properly, whereas other init systems just yank the cord; that seems faster, so I guess it's 'better'. (In which case you can just shorten the systemd timer to something like 1s and it would be even faster; look for 'DefaultTimeoutStopSec' in /etc/systemd/system.conf.)
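Concretely, that timeout can be shortened with a small drop-in rather than editing system.conf directly. A sketch; the 10s value and file name are just illustrations:

```ini
# /etc/systemd/system.conf.d/10-stop-timeout.conf  (hypothetical drop-in)
[Manager]
# Wait at most 10s for a unit to stop before killing it
# (the default is 90s, hence the familiar hang on shutdown).
DefaultTimeoutStopSec=10s
```

This takes effect after a reboot or `systemctl daemon-reexec`; individual units can still override it with their own `TimeoutStopSec=`.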
> Do you mean openRC, s6 etc? They still all use bash scripts (which as a result depend on the dev for quality and can vary quite a bit) vs systemd's clear, uniform service definitions. They don't provide much beyond starting services and are more like wrappers around sysvinit than anything else.
I'm not that familiar with openRC or s6 (I've played with them, but don't use them on any 'real work' systems), though my understanding is that there is a proper openrc-init being developed.
I was actually thinking more of runit and shepherd, which provide their own init and daemon-management.
> > my runit systems all behave themselves; my systemd systems frequently hang on reboot
> Every time I've seen this in practice it turned out the system was actually not configured properly, (usually crypttab or something related),
This is across multiple systems, from Arch boxes, which I've hand-configured, to vanilla Ubuntu boxes, and ranges in behaviour from multiple 90s timers to hanging on a completely black screen until forced to reboot. There are certainly no obvious configuration issues on any of these, definitely not crypttab or the like.
> shut down properly vs other init systems just yank the cord, but it seems faster, so I guess it's 'better', (in which case you can just shorten the systemd timer to like 1s and it would be even faster doing that)
But, even timers aside, runit is faster than systemd on both boot and shutdown, for similarly-configured systems. And it provides (me at least) a better user experience than systemd.
runit shuts things down properly; with systemd I often have to literally 'yank the cord'.
I guess I did not explain myself properly: s6 is not actually a wrapper over sysvinit in practice; it's its own thing, like runit. But like runit it relies on bash scripts, which are not declarative and are open to wide variation in quality, just like sysvinit.
Also, runit does e.g. rely on logind, which everybody forgets systemd took upon themselves when consolekit became unmaintained. Why does runit rely on systemd's work here?
> runit shuts things down properly; with systemd I often have to literally 'yank the cord'
I've looked at how systemd handles shutting things down vs other inits and it's just a lot more paranoid, i.e. it waits for confirmation of every single service being shut down etc. In other init systems it tends to be a case of 'send shutdown signal, assume process shuts down'. You could fault systemd for this, but it's more of an application-level issue.
> But like runit it relies on bash scripts, which are not declarative and open to a wide variety in quality, just like with sysvinit.
In theory, this makes sense, with potential issues of poorly written shell scripts. In practice (I assume arising from its increasingly baroque complexity) systemd ends up with stability issues which aren't pleasant from an end-user perspective.
Setting shutdown aside, I had a horribly difficult-to-debug boot issue with systemd: a daemon failing in an unexpected way caused a failure to bring up any services, but without systemd failure messages. The problem was a systemd emacs service. On one boot I had an issue in init.el which caused emacs initialisation to hang, so the root cause wasn't systemd. But systemd didn't fail gracefully (or informatively) in this case. And it took me a while to figure out why systemd wouldn't bring up any services, since it wasn't reporting to me that the emacs service (or any service) had failed.
I've since reverted to launching emacs daemons via .xinitrc or the like on my systemd boxes, so that any emacs init issues don't pull the rest of systemd down.
> On one boot I had an issue in init.el which caused emacs initialisation to hang, so the root cause wasn't systemd. But systemd didn't fail gracefully (or informatively) in this case. And it took me a while to figure out why systemd wouldn't bring up any services, since it wasn't reporting to me that the emacs service (or any service) had failed
If it’s hanging it hasn’t failed. If that daemon is configured as a blocking dependency, waiting is the correct thing to do — this is why timeouts and health-checks are as important as making sure that your dependencies are as flat as possible.
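The timeouts and health-checks mentioned above can be expressed per-unit. A hedged sketch of how the hanging emacs service could have been bounded (the service name and values are assumptions; note `TimeoutStartSec=` only bites for unit types that wait for readiness, such as `Type=notify` or `Type=forking`):

```ini
# /etc/systemd/system/emacs.service.d/timeout.conf  (hypothetical drop-in)
[Service]
# Fail the unit after 30s instead of blocking dependents indefinitely.
TimeoutStartSec=30s
# If it does fail, retry rather than leaving the boot stuck.
Restart=on-failure
RestartSec=5s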
If your systemd isn't able to "shut things down" quickly, that's the fault of an application not responding to a shutdown signal and systemd waiting before it kills it.
While I generally understand your sentiment, I can't see how does it apply to Wayland in particular. You can replace Wayland with X11 and basically make the same argument.
You can implement Wayland compositor on top of X11 protocol if you wish. You can also implement X11 server on top of Wayland protocol (and that's actually already done and widely used, called XWayland). Given the oddities and architectural baggage of X11, designing a new protocol from scratch is really welcome there.
(of course you can argue whether some decisions made while designing Wayland core and its extensions are sensible, but that's a completely different discussion)
XWayland isn't a normal Wayland client - it requires special assistance from the compositor. In particular, the compositor acts as the X window manager and there's a special interface between it and XWayland on the Wayland side to make everything work.
I don't think it's possible to even implement something like XWayland as an ordinary Wayland client. By design, ordinary Wayland windows are not permitted to know or control their on-screen position, and X relies heavily on apps being able to do both. This is also a major reason why Wine will never support Wayland in the same way that it does X.
Basically, if your preferred compositor ever gets fed up of maintaining support for running X applications, the only way you'll be able to run apps that haven't been ported to Wayland is in an emulated desktop confined to its own window. Expect this to happen sooner rather than later given the current attitudes to backwards compatibility. (Think about Ubuntu dropping support for 32-bit applications, for example.) On the positive side, at least Windows 10 doesn't suffer from the same limitation...
I never mentioned implementing XWayland as an ordinary Wayland client.
Basically any sensible Wayland compositor you may want to use on your desktop already has XWayland properly integrated. If one of them drops it, you can switch to a more sensible one.
It sounds weird to me to claim the Wayland ecosystem is less modular than the X ecosystem. With X, you're depending on one huge monolithic display server implementation; with Wayland, you can switch display server at will and they all conform to a common protocol.
I get the concern that most of the functionality which used to be in separate programs is now bundled with the Wayland compositor, and I used to share it, but after getting more involved with Wayland stuff, I don't think it's really that big of a deal. The wlroots people have been really good at introducing new common protocols for stuff people actually need. The Wayland ecosystem, at least outside of Gnome, is actually really nice and improving.
With X, you can switch your display server as well (and there were proprietary X server offerings available as late as the late 90s/early 2000s). It's just that Xorg, being internally more modular than XFree86, sapped the value add of redoing the entire X server and effectively "won".
In fact, with X you can switch out your display server and keep your window manager. Can't do that with Wayland -- by design.
> In fact, with X you can switch out your display server and keep your window manager.
No, that's not a fact, that's a theory. In practice, there is only one display server that supports modern devices, Xorg. No modularity is lost by replacing a single display server implementation by a single libwayland implementation.
Wayland is more modular in other ways, too. We went from IPC-based modularity with separate processes towards modularity via relegating different tasks to different libraries. Like being able to use libinput and libdrm and EGL and libwayland all separately makes for a much more modular stack than putting it all in X.
> Remember that udev and sysfs are written by the same people, working together off-list. They're free to break the exported data format on a whim, because they write the code at both ends and fundamentally they're talking to themselves. They honestly say you can't expect a new kernel to work with an old udev, and they say it with a straight face. (To me, this sounds like saying you can't expect a new kernel to work with an old version of ps, because of /proc.)
> Documentation is a threat to this way of working, because it would impose restrictions on them. A spec is only of use if you introduce the radical idea that the information exported by sysfs exists for some purpose _other_ than simply to provide udev with information (and a specific version of udev matched to that kernel version, at that).
Wayland is actually more sensible than X. People used to say X had become an OS itself, having memory management, font management, a full drawing API (not used by modern GUI toolkits), and more.
One thing transitions like this do is cause the death of unmaintained programs. In some cases that's good, others bad. But ultimately if no one cares enough to modernize it that says something. But we also shouldn't keep developers on an endless train of changes to keep up with. Wayland is like 10 years in the making, so there really is no excuse not to be there already.
> People used to say X had become an OS itself, having memory management, font management, a full drawing API (not used by modern GUI toolkits), and more.
That's OK. I used X terminals from HP in the early '90s: 1024x1024 monochrome with an Ethernet connection. Their only purpose was to manage the screen, mouse and keyboard. I don't know which OS they were running (something proprietary?) but probably X11 was the only application they were running.
Haven't seen a case of a dbus daemon replacement, only, I think, one or two client libraries; one is niche and the other is so niche I think fewer than 10 people ever used it.
AFAIK kdbus re-implemented that half of it, albeit in the kernel rather than as a daemon. I've not been given the impression that there is anything preventing additional implementations.
The strongest technical argument against dbus I've yet heard is that it's too slow for file transfers or something... well fine then, don't use it for that. But for plenty of other stuff it seems to work just fine. People complain about the performance, but not once have I ever opened up htop and thought to myself "Gee whiz, dbus sure is breaking my balls today."
From my perspective, most of the hate against dbus seems to be fallout from the systemd feud, to which dbus is tangentially related (if systemd sucks, it's not because of dbus.) Maybe with a few cat-v/suckless fanatics thrown in on the sidelines..
kdbus was afaik not-exactly-compatible (at least not wire protocol compatible), and as its main reason for existence had speed.
And finally died after Linus went to town on dbus-daemon pointing out a bunch of fixes that made it much faster (though I don't know if anyone turned that example into actual patches for dbus-daemon).
As for arguments against D-Bus: it's a mess. The protocol is variable-endianness for no reason, and back in the very beginning the XML interface spec got significant criticism because introspection required punting and parsing XML.
Personally I'd also point to the mess of multiple identifiers: last time I tried to figure out details for a third-party implementation, I got 3 different addresses per object, whereas I believe a simple reference would sometimes suffice.
Also, unlike its (probably closest) predecessor DCOP, it's very hard to use d-bus outside of writing a complex program.
Systemd is a factor, but it started years before. D-Bus is "desktop bus", and was rather obviously designed to be the interconnect between applications on the desktop. Yet it seems to have seen precious little use for its original goal of "COM-like" interop on the desktop. Then it started to be used for system daemons. HAL was somewhat reasonable if problematic. PolicyKit and ConsoleKit ... not so much. Those two laid foundations for badly described but "crucial" APIs provided by logind and systemd that were forced down everyone's throat later on. Now that I think of it, NetworkManager taking years to become configurable without the GNOME panel might have had a hand in it too (and cast a shadow on D-Bus due to NM developer arrogance in the past).
So, generally, a bit of technology, a containership of politics.
One thing that a great majority of commenters here seem to be missing is that Wayland's issues that make it unsuitable to replace X Windows will not just be ironed out in a couple of years (at least not in a way that would be an improvement over X), because it is flawed by design.
The thing is, Wayland developers do not want you to take screenshots or automate input events (injection and interception). Those are both "power-user" and "accessibility" features. So respectively those who like to use the Unix (or other OS) programming environment to its full potential (hackers?), and the blind/visually impaired will have a hard time if Wayland gets forced on them.
It is possible to solve those problems with effort on a per-compositor basis (meaning less choice for users and more redundant programming effort - programs that interact with GUIs will need to have separate code for each compositor!), or with protocol extensions - which, of course, would not be universally accepted. For example, I think no compositor currently gives a Wayland user the option to mess with input events (key presses, etc.). This means no hot-keys!
Quoting Red Hat: "Furthermore, there isn’t a standard API for getting screen shots from Wayland. It’s dependent on what compositor (window manager/shell) the user is running, and if they implemented a proprietary API to do so."
An interesting Reddit discussion: "It has been almost a decade, why does Wayland not have a protocol definition for screenshots?" - answer - "Because security, dude! Wayland is designed with the thought that users download random applications from the interwebz which are not trustworthy and run them. Wayland actually makes a lot of sense if you don't think of Linux desktop distributions and desktop systems, but of smartphones. But for some reason we absolutely need this technology on the desktop, like we had not enough pain and loose ends over here without it." [7]
See [1] [2]. And my previous comments on the same topic: [3] [4].
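The fragmentation is visible in the tooling itself. Under X a single screenshot tool works regardless of window manager; under Wayland the right tool depends on which compositor you run. An illustrative sketch (the tools named are real, but whether each one works depends entirely on the session; these commands need a live display, so don't expect them to run headless):

```shell
# X11: one tool, any window manager
import -window root shot.png          # ImageMagick's import

# Wayland: compositor-specific
grim shot.png                         # wlroots compositors (sway, etc.)
gnome-screenshot -f shot.png          # GNOME Shell, via its own D-Bus API
spectacle -b -o shot.png              # KDE/KWin
```

A script that wants to "take a screenshot on Linux" now has to detect the compositor and branch, which is exactly the per-compositor redundancy the comment above complains about.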
Another thing wrong with Wayland is that forced compositing means noticeably (in interactive applications) more latency.
Small nitpick regarding the blog post: Chromium depends on GTK3.
Maybe the death watch for the traditional Linux desktop has also started...
Linux distributions have had a decade long slide in their usability and polish. Much of this has been due to an abandonment of its historical UNIX roots in favour of half-baked "modern" replacements.
Being a UNIX replacement with a full X11 server and all the rest was what made it compelling and practical in the first place.
However, it's coming to the point where it's simply getting unusable. And I say this after using Linux on the desktop as my primary user and development environment for over 22 years at this point. If I want actual UNIX, I can run FreeBSD in a virtual machine, or even on the bare metal. Otherwise, I might as well resign myself to fate and use Windows with WSL or VMs for everything else. If I'm going to be forced to use something I dislike, it might as well be something that properly supports all my hardware and I can be productive with despite its annoyances.
I come from an environment where we have a reasonable number of Linux workstations; approaching 2000 in the company I work for, and more in the wider field.
> Being a UNIX replacement with a full X11 server and all the rest was what made it compelling and practical in the first place.
Yes, here as well. We have big apps, small apps, legacy apps, remote apps, all doing real work.
For the enthusiast user base, who mainly live within the distribution's ecosystem, this sort of churn in the platform is exciting and interesting, or feels like progress. The platform is the end result.
For us, the platform is just that -- a platform on which sits a lot of vendor applications and custom tooling assembled over decades with its idiosyncrasies carefully accommodated.
This churn isn't good. Anything short of 100% compatibility is going to prompt a risky re-evaluation of what's out there on the market -- risky because Windows is in the strongest position, not Linux.
Linux is already in a weaker position than it once was as a workstation OS. My colleagues will be enticed by a Windows platform which has emerging Linux compatibility, and a tried-and-tested build of most of the vendor-driven GUI applications and GPU drivers, too.
>I say this after using Linux on the desktop as my primary user and development environment for over 22 years at this point
Five years ago, after 15 years of using Linux on the desktop, I bought a Mac Mini for my daily driver. I have not regretted that decision once. For anything needing more compute power, I can shell into a beefier Linux server without a GUI. I still develop the same sorts of apps that I did before the switch, and am still intimately familiar with the same (application programming) interfaces as I was in Linux. Just now my sound doesn't randomly stop working, my disks don't refuse to decrypt, and my video doesn't randomly stop working because somebody at Red Hat decided that the graphics libraries I was using were naughty, or that I should have less choice in video cards by excluding support for the most performant video cards available at the time I bought them.
> Otherwise, I might as well resign myself to fate and use Windows with WSL or VMs for everything else. If I'm going to be forced to use something I dislike, it might as well be something that properly supports all my hardware and I can be productive with despite its annoyances.
I was going to make my own top-level comment, but you've said everything I wanted to say.
The day X dies is the day I switch back to Windows. I can simply use WSL and KDE Applications to get something very close to the parts of Linux I like.
I can't say it any better than JWZ did over 15 years ago. This is the result of the Cascade of Attention-Deficit Teenagers model and is why Linux will not approach Windows or OSX in usability.
Cannot link to his site:
JWZ dot org /doc/cadt.html
A complete teardown and rebuild of the whole ecosystem (because X succccckkks) is fun! Making something backwards-compatible, or standardized, or polished that "last 90%" so it works on everybody's environment... is not.
Four or five years ago I had contacted jwz about what seemed to be a glitch in the Pac-Man Xscreensaver (sometimes the blue maze and dots wouldn't appear). He suggested it was likely a video-driver issue, potentially tied to hardware acceleration. I had replied something along the lines of "oh well, probably things like this will get better with Wayland, even if it introduces other issues".
His response was: "LOL! Yeah you can't even imagine how much worse it's gonna get. Pretty soon the Linux desktop will just be a little hammer that pops up and smashes your knuckles at random."
The CADT development model is very real, but it's not really a good description of the Xorg situation. The most active Wayland developers are not only also Xorg developers but have been so for a very long time. They know their history.
X11 has some fundamental problems. It's not fun using a modern desktop where it is impossible to sandbox individual applications. Firefox frequently processes untrusted data and is run with a dedicated uid, but it is also an interactive application and can suddenly listen in on all my passwords entered in other applications. Virtualized appliations are referred to using dumber protocols like VNC and SPICE which isn't always ideal.
Wayland has some interesting design issues on its own but that doesn't make the developers ignorant. XWayland will stick around for the foreseeable future.
> The most active Wayland developers are not only also Xorg developers but have been so for a very long time
Amusingly, I often hear this argument from Wayland proponents. Do they realize that those very same developers have failed at maintaining the Xorg code base and fixing its bugs in a backward-compatible way?
To run a project into the ground and wash one's hands of it... is not worthy of endorsement.
It's important to know that there were once many implementations of X11. Xorg is the only one that hasn't failed. Everything else is unmaintained now.
This may be a matter of different perspective, but from my point of view the Xorg maintainers did a monumental work modularizing and cleaning up the codebase. They kept integrating modern extensions and have kept X11 relevant all this time. Try something pre-Xorg to compare.
> To run a project into the ground and wash one's hands of it... is not worthy of endorsement.
But to run a project into the ground and not wash one's hands of it is an even worse endorsement.
You might not trust the Xorg devs to develop you a new window system, but at least you'd trust them to know if their own, existing code has fundamental problems.
No. An X11 connection forwarded with -Y can receive all keyboard events, capture all screen pixels, inject mouse/keyboard events and read/write the clipboard. A malicious X11 client app can totally pwn you, forwarded or not.
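None of this snooping requires any privileges; any client on the same display can do it with stock X utilities. An illustrative sketch (needs a running X session, and the window name is an assumption):

```shell
# Enumerate input devices, then stream every key press/release on the
# display, regardless of which window has focus:
xinput list
xinput test <keyboard-device-id>

# Or inject keystrokes into some other client's window:
xdotool search --name "Terminal" type --window %1 'echo typed by another client'
```

This is the core of the security argument: on X, "can connect to the display" is equivalent to "can keylog and puppet every other client", which is precisely what Wayland's per-client isolation is designed to prevent.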
X does suck, it's incredibly insecure and the workarounds for regaining some sense of application isolation are obtuse and resource-draining. I could go into all the little technical details but the fact is, X has been around for decades and its core was made for much simpler machines.
It's time for something new. This isn't attention-deficit developers at play. People have spent a lot of time on the Wayland ecosystem. This isn't just some kids hacking away in their basements ignorant of better development practices.
You argument holds up for many Linux components, but not for X/Wayland. It's preposterous to take the stance that no component in the entire Linux ecosystem ever needs to be replaced.
X is very interdependent, and a lot of it isn't fully modularized. When you need to change the entire foundation, knocking down a few walls won't get you there.
It sucks, I know. Decades of hard work "gone". But think of it this way, isn't it often said you shouldn't be afraid to throw away old prototype code? X is a decades-long foray into graphical computing, but perhaps it's time to throw away the prototype and start over with what we've learned from the process.
I hear this all the time. But so far there has not been even one incident where a major hack was carried out via X11.
The reason for that is that people who use X also exclusively use trusted apps. The Wayland security model makes many generic tasks insanely complicated. The real world security gains will be zero. Security through obscurity never worked.
Poison a single common open-source daemon which is used in a lot of systems: that daemon will have access to all keystrokes, window content, etc.
> The reason for that is that people who use X also exclusively use trusted apps.
Well that's just not true. What does trusted mean? Most of the software on our desktops is unaudited. We don't have insight into the entire chain of deployment for more than a handful of applications we feel like keeping up with.
On a side note, that would be a wonderful service: a man-like utility which simply displays change-logs coupled with relevant source-code diffs. It reduces the surface of trust down to one party. Something like that would require financial backing, however.
> This is the result of the Cascade of Attention-Deficit Teenagers model and is why Linux will not approach Windows or OSX in usability.
While I definitely agree with the CADT issue; despite it, the Linux desktop experience still beats the Windows or OSX one. If the Linux desktop is "a little hammer that pops up and smashes your knuckles at random", the Windows and Mac desktops are big hammers that smash various body parts at random.
Windows and OS X are well-known for constantly deprecating APIs and replacing them with new ones. So clearly the best way to compete with them is to insist on dogmatic adherence to 30-40-year old APIs that are known to be horrible matches for how modern software stacks (including competitor OSes) actually work.
Windows has supported its core APIs for longer than Linux, or in fact X11, has existed.
It's actually possible (and has been demonstrated) to upgrade from 1985's Windows 1.01 all the way up to 32-bit Windows 10 and run a Windows 1.01 application without changes.
Arguably, backward compatibility is the true reason MS came to dominate the OS market: you had a good chance your software would just work on the new OS version.
Well that's because if you click on that link from HN it uses the referrer details provided by your browser to show quite a rude message and picture ;)
This particular juvenile humor says more about JWZ than about HN. Just look at the example ITT. The "CADT" metaphor doesn't have much to do with the developers either of X or of Wayland.
( I don't doubt for a minute that someone will pick up the slack, but that makes me wonder: who's gonna pay for it? Would you contribute to e.g. a Kickstarter for X Windows? FWIW, I'm not sure I would; no disrespect meant, but I've been hoping to ditch X for something better ever since Don Hopkins opened my eyes to NeWS, &c. )
> Arcan is a powerful development framework for creating virtually anything between user interfaces for specialised embedded applications all the way to full-blown standalone desktop environments. Boot splash screen? no problem. Custom Interface for your Home Automation Project? sure thing. Stream media processing? Of course.
> It has been used in a number of experimental, hobby, academic research and commercial projects and products alike, in areas ranging from VR desktops to industrial computer vision systems for robotic automation.
> At its heart lies a robust and portable multimedia engine, with a well-tested and well-documented interface, programmable using Lua. At every step of the way, the underlying development emphasises security, performance and debuggability, guided by a principle of least surprise in terms of API design.
Thank you, I hadn't heard of it before so I'll check it out!
I'm not optimistic about Wayland, since early on they decided not to use an extension language, and that's not something you can have a change of heart about later, and then just nail onto the side.
It would be interesting to see how Arcan uses Lua, which is a great language for that kind of stuff. It's a lot smaller and sleeker and better designed than JavaScript.
The Arcan developers should check out how Factorio modding works! (But then all work on Arcan would halt for months while they were addicted to Factorio.)
Lua's main problem is that it isn't JavaScript (i.e. in JavaScript's enviably lucky position of ubiquitous dominance).
If I had a time machine, I'd go back and try to convince Netscape to use Lua 2.1 instead of inventing JavaScript (released December 4, 1995). And hire the Self guys (Dave Ungar, Randy Smith and the crew who eventually made the Java HotSpot compiler) away from Sun, and Mike Pall (LuaJIT) from wherever he was!
>Lua 2.1 was released on 07 Feb 1995. Its main new features were extensible semantics via fallbacks and support for object-oriented programming. This version was described in a journal paper. Starting with Lua 2.1, Lua became freely available for all purposes, including commercial uses.
I get paid to work on Arcan, so I don't think Factorio would have that big of an impact ;-)
The initial choice of Lua for Arcan was based on its use, at the time (2004-2005ish), in World of Warcraft. The UIs that people were hacking together in WoW, even with little to no serious programming experience, were way more advanced than what would ever be needed for desktop interfaces, so it seemed like a good fit. Even for other projects, Lua is still my go-to "safe" choice for C.
There are more interesting properties though, especially how the VM bindings integrate with C. Stay away from using too much metatable magic etc., and what you get is an OK "protocol+binding" in one. Substitute the stack for socket read/write, add an identifier-to-function translation, and you can choose between in-process or stronger separation. The reality is slightly more involved, but not by much.
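A minimal sketch of that "protocol+binding in one" idea (the names here are hypothetical illustrations, not Arcan's actual API): a name-to-function table serves as the in-process binding, and the same textual form could just as well arrive over a socket read for stronger separation.

```python
# Hypothetical sketch: a name->function dispatch table that doubles as a wire
# protocol. The same "resize 800 600" line could come from an in-process call
# or from a socket read without changing the handler code.
HANDLERS = {}

def register(name):
    def wrap(fn):
        HANDLERS[name] = fn
        return fn
    return wrap

@register("resize")
def resize(w, h):
    return "resized to %dx%d" % (w, h)

def dispatch(line):
    # Identifier-to-function translation, as described above.
    name, *args = line.split()
    return HANDLERS[name](*(int(a) for a in args))

print(dispatch("resize 800 600"))  # resized to 800x600
```

The point is that the protocol surface and the language binding are the same table, so moving a handler out of process is a transport change, not an API change.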
WoW was what convinced me how practical and powerful Lua really was, too. And that there were people out there that knew how to program it really well.
The WoW "Auctioneer" mod in particular was quite advanced and elegantly written. It would analyze the prices of items in the auction house over time, and help you price and sell your own items competitively. Not just lots of fancy user interface, but some respectable data wrangling and number crunching too.
One important thing about Lua is how cleanly and efficiently it integrates with C code. TCL/Tk also has this virtue, but TCL was a terribly designed (but brilliantly implemented) programming language. While both Lua's design and implementation are quite excellent. It also has a great community, plus RMS never declared a holy war on it, either!
> TCL was a terribly designed (but brilliantly implemented) programming language.
What do you dislike about Tcl? I always kinda liked it.
> While both Lua's design and implementation are quite excellent.
The implementation is pretty good, but 1-based arrays are pretty evil. I don't know if I like the way arrays & tables are conflated either: part of me thinks it's clever, and part thinks that it's too clever by half.
It's also pretty verbose.
Something I like about both Tcl & Lisp is that they have a good built-in messaging format (Tcl object notation in the former, and S-expressions in the latter). Lua doesn't really have something similar, which is IMHO unfortunate.
Some (most) details from recent years are subject to NDAs of various sorts - the rough areas are in some computer vision projects for automated IoT/mobile device testing, runtime "VJ-ing" with watermark injection/removal and embedded device forensics.
The "desktop/display server" parts are just a sidetrack that I do to get about as far away from what is otherwise happening in that space as possible. I need a sane work environment that doesn't keep making decisions behind my back and isn't laughably unreliable, and this turned out to be the easiest, albeit time-consuming, route.
>> I'm not optimistic about Wayland, since early on they decided not to use an extension language, and that's not something you can have a change of heart about later, and then just nail onto the side.
I don't see a need for an extension language for a compositor. A minimum of DE features will be implemented in Wayland compositors and everything else will be other programs; write them in whatever you like. Or am I wrong about the partitioning on Wayland?
The extension language is most useful for the window manager, which should be running in the same address space as where the events are being generated and processed.
I think this was tried with GNOME Shell and turned out not to be such a good idea. It runs JavaScript in its Wayland compositor/window manager and that reportedly causes unreasonable problems with lag and unresponsiveness. The Purism people were advised to write their own compositor/shell instead for the GNOME/GTK-based Librem 5 phone.
Maybe that's a Wayland problem though, with the way it combines the window manager with everything else into one process.
I'm really not very sad to see X go. It was old, creaky and insecure. As advanced and cool as it may have been at one time, and as much as I respect the people who worked on it, it's time for it to go.
As for the people complaining about Wayland possibly missing some of X's features, remember that it takes time for something to mature, to accrete features and fixes, and Wayland hasn't had that time yet. If we give it the time and support it could and almost definitely will grow into something even better than X. We can't stay on X forever without incurring even worse results on ourselves than what we'll get when we switch to Wayland.
And another thing. For the people who complain about Wayland in the same breath as PulseAudio and systemd: this tells us much more about you than about these projects. The only thing Wayland has in common with the other two is that it's new, and that you don't like it. It's arguably more modular, more UNIXy, and more open than X.
> As for the people complaining about Wayland possibly missing some of X's features, remember that it takes time for something to mature, to accrete features and fixes, and Wayland hasn't had that time yet.
It has been a decade; precisely how long does it need to catch up? I ask because, from what I can tell, it is still missing features that were standard in Windows Vista in 2008, when that OS switched to DWM and WDDM.
It's really not worth it when you're facing a huge downside in usability on things that should have been accounted for in the core protocol(-set), but a decade later you instead face alpha-quality, unstable, warring, incompatible proposals.
My own idea (for improving the X system, although maybe it could be used with Wayland too, I don't know) is to use proxies to correct those kinds of problems.
With pulse, wasn't the main complaint that it was buggy for too long after being pushed on users? Seems similar to Wayland (the ecosystem) in that regard
Is there any plan by Wayland to incorporate ssh forwarding? Last I heard it wasn't in the cards, and I think that feature is useful enough; it's probably the only feature I really would miss.
This is pretty interesting! I'm worried it's going to be a long path to getting things up and working again like they do with X today though. For example, my typical remote-X use-case is connecting to a Linux server from my macOS laptop. I haven't been following Wayland and/or waypipe enough to know how feasible that kind of setup would be with them. Are they potentially cross-platform enough that one end could run on macOS?
No, not really similar. It is basically raw video that transfers the full window content over the wire.
X11 is capable of drawing primitives. Even though the most popular toolkits (GTK, Qt) sadly did not use them, those that did (e.g. Athena, Tcl/Tk) worked perfectly over the network, even over modem lines.
If Wayland contained modern drawing primitives like those Cairo offers, which could easily be serialized, we would have real network transparency even over slow, low-bandwidth connections.
> It is basically a raw video that transfers the full window content over the wire.
Because that's more efficient.
> If Wayland would contain modern drawing primitives like those that Cairo offers which could easily be serialized, we would have real network transparency even over slow low bandwidth connections.
Nope, drawing commands for a modern UI are more bytes than the window contents. This is a deeply counterintuitive topic and it's unfortunate that nobody implemented waypipe until recently so it was impossible to measure.
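Some back-of-envelope arithmetic for this tradeoff. The numbers below are illustrative assumptions, not measurements; they only show why the two approaches can land in the same ballpark once damage tracking and compression are considered, which is what makes the topic counterintuitive.

```python
# One full uncompressed RGBA frame of a full-HD window:
width, height, bpp = 1920, 1080, 4
full_frame = width * height * bpp          # bytes for one whole frame
print(full_frame)                          # 8294400 (~8 MB)

# In practice only the damaged region is sent, e.g. a 200x20 text line,
# and flat UI pixels compress well (assume 10:1 as an illustration):
damaged = 200 * 20 * bpp                   # 16000 bytes raw
sent = damaged // 10                       # 1600 bytes after compression
print(sent)

# The same update as serialized drawing commands: dozens of glyphs, each
# carrying a glyph id, position and attributes, plus state setup, quickly
# reaches the same order of magnitude:
glyphs, bytes_per_glyph_cmd = 40, 24
cmd_stream = glyphs * bytes_per_glyph_cmd  # 960 bytes for one redraw
print(cmd_stream)
```

Which side wins depends heavily on the content, which is why actually measuring with something like waypipe matters more than intuition here.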
Any proof for this? I don't believe you. Every SVG is smaller than the rendered PNG counterpart. And SVG is a really inefficient way to serialize drawing commands.
Also, including drawing in your compositor (like Windows and macOS do) gives you infinite scaling for free, which would solve the multi-DPI monitor problem.
Very little modern software uses the X drawing functions. GTK uses Cairo, but so does X (that's where it came from: the library behind the X drawing API). But apps will use the GTK implementation of Cairo (client-side) because it works on other platforms as well (Windows and OSX). In most cases it doesn't actually make sense to put your drawing functions in the display server any more.
Cairo wasn't the library behind the X11 drawing API, it was originally the Xr rendering extension, that was an alternative to the original X11 drawing API.
>The name Cairo derives from the original name Xr, interpreted as the Greek letters chi and rho.
You're right, it doesn't actually make sense to put your drawing functions in the display server any more (at least in the case of X11, which doesn't have an extension language to drive the drawing functions -- but it did make sense for NeWS which also used PostScript as an extension language as well as a drawing API).
So Cairo rose above X11 and became its own independent library, so it could be useful to clients and toolkits on any window system or hardware.
Here's some email discussion with Jim Gettys about where Cairo came from:
From: Jim Gettys <jg@laptop.org>
Date: Jan 9, 2007, 11:04 PM
The day I thought X was dead was the day I installed CDE on my Alpha.
It was years later I realized the young turks were ignoring the disaster
perpetrated by the UNIX vendors in the name of "standardization"; since
then, Keith Packard and I have tried to pay for our design mistakes in X
by things like the new font model, X Render extension, Composite, and
Cairo, while putting stakes in the heart of disasters like XIE, LBX,
PEX, the old X core font model, and similar design by committee mistakes
(though the broken core 2D graphics and font stuff must be considered
"original sin" committed by people who didn't know any better at the
time).
So we've mostly succeeded at dragging the old whale off the beach and
getting it to live again.
From: Don Hopkins <dhopkins@donhopkins.com>
Date: Wed, Jan 17, 2007, 10:50 PM
Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.
A lot of that old X11 stuff was thrown in by big companies to shill existing products (like using PEX to sell 3d graphics hardware, by drawing rotating 3-d cubes in an attempt to hypnotize people).
Remember UIL? I heard that was written by the VMS trolls at DEC, who naturally designed it with a 132-column line-length limitation and no pre-processor, of course. The word on the street was that DEC threw down the gauntlet and insisted on UIL being included in the standard, even though the rest of the committee hated it for sucking so bad. But DEC threatened to hold their breath until they got their way.
And there were a lot of weird dynamics around commercial extensions like Display PostScript, which (as I remember it) was used as an excuse for not fixing the font problems a lot earlier: "If you want to do readable text, then you should be using Display PostScript."
The problem was that Linux doesn't have a vendor to pay the Display PostScript licensing fee to Adobe, so Linux drove a lot of "urban renewal" of problems that had been sidelined by the big blundering companies originally involved with X.
>So we've mostly succeeded at dragging the old whale off the beach and
getting it to live again.
Hey, that's a lot better than dynamiting the whale, which seemed like a such good idea at the time! (Oh the humanity!)
From: Jim Gettys <jg@laptop.org>
Date: Jan 17, 2007, 11:41 PM
> Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.
Yup. Cairo is really good stuff. This time we had the benefit of
Lyle Ramshaw to get us unstuck. Would that I'd known Lyle in 1986; but
it was too late 3 years later when I got to know him.
I would be careful about describing Cairo's primitives as modern. Cairo-style rendering APIs and implementations are struggling to keep up with modern user interfaces and modern resolutions. It isn't super clear which way 2D rendering APIs will go in the future. It seems silly to me to bake such an assumption into a protocol.
I see this a lot and now I need to ask: do you actually use it and find it usable?
Every time I've tried it, it was plagued with problems (fonts, HiDPI issues, different environments) and horrible performance. I mostly had to fall back to NX, which is OK-ish, but not at all better than Windows' RDP or the better VNC solutions. So what's the big draw here?
I use it, but generally only at work on a local network, where the performance is fine. My most common use is to run R on a big server but still have plots pop up on my laptop. There are other ways to do this, e.g. one of the IDEs for R (RStudio) apparently has some infrastructure that would let me run R on my laptop and then farm out the computations remotely. But I find just ssh -X the easiest for now.
It was pretty great when I was a university student using X forwarding within the campus network between my dorm and the CS department (in either direction). Certainly not as fast as local, but extremely usable when not trying to do graphics-centric things. (Even using Eclipse was viable.)
Your mileage may vary by use case and bandwidth, especially over the public internet. But if someone has a similar use case to mine in 2019, it's still probably usable.
Even if it's usable, it's probably not a good idea. X windows are not rendered by the server anymore. The client (i.e. the application) renders them and just transmits a full pixmap to the server. Depending on the redraw strategy, you could have a worst-case scenario where 60 frames per second of the entire window are transmitted completely uncompressed (except for the compression that SSH applies). You'll be sailing much more efficiently by using a VNC client.
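To put rough numbers on that worst case (the window size and frame rate are illustrative assumptions):

```python
# Worst case described above: 60 full-window pixmaps per second, uncompressed,
# for a modest 1280x720 window with 4 bytes per pixel (RGBA).
width, height, bytes_per_pixel, fps = 1280, 720, 4, 60
bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second)            # 221184000, ~221 MB/s
print(bytes_per_second * 8 / 1e9)  # 1.769472, i.e. ~1.77 Gbit/s
```

That is more than a gigabit link can carry, before SSH's stream compression, which is why a damage-aware, pre-compressed protocol like VNC's tends to fare better for this workload.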
To be pedantic, this is true with "modern" toolkits (like, this century). Applications using Motif or Xt or anything similarly ancient still do server-side drawing, and they absolutely shine with today's network latency and speed. But most applications moved on from these toolkits, with good reason.
+1 to this. For my first two years the university didn't have student Matlab licenses but the CS department machines had copies so I would ssh -X for my coursework requiring Matlab (needed the GUI to see plots).
It's not only ssh. I start GUI applications as a different user than the current desktop user constantly. It's a way for me to separate uses from each other. It gives you a huge amount of flexibility.
I do a lot of these things with Linux that you cannot do in Windows:
- Log in, start an arbitrary program. Then log out, without the program stopping
- Start a GUI application as a different user than the one logged in graphically
- Start a GUI application that resides inside a container
- Start a GUI application in the local LAN over ssh
Take those away and you take away the biggest edge Linux has over Windows.
Re. performance. Did you use SSH's “-C” switch for data compression? The first time I used X over SSH it was lagging as all hell, until I used this switch, and after that everything ran much smoother.
We use it semi-regularly at work. (I do Linux Administration for a hosting company.) Or, more precisely, our Oracle team uses it, as there seem to be several Oracle-related utilities that require a GUI. I don't work with it directly, but I have helped to set it up on that team's behalf from time to time. For that purpose, it seems to be quite stable.
I don't think X-forwarding is a good substitute for VNC(-alikes) if you need to do long sessions, but for occasional one-off GUI work where either there's no CLI equivalent, or the CLI version isn't the best choice, it does get the job done.
X forwarding seems to work enough for my primary use case. I have some supermicro servers for work with a java applet for KVM. Because of a series of reasonable but unfortunate policies, I can't run the applet directly in the laptop's host OS. The best way is to run it in a VM. It's a lot better experience for me to run the applet over X forwarding despite some issues with window resizing that I suspect are related to hidpi, than to run a full X desktop in the VM. Host is osx, guest is freebsd with linux compat and oracle jre for linux.
It was also useful when I wanted to output some pixel graphics from Erlang to my Windows desktop from a program running on unixy devices: X server on Windows, X client on Linux, FreeBSD, OS X -- occasionally also run with OS X as the X server and Linux or FreeBSD as the X client.
The key thing for me is the ability to export the gui for a single application on one system to a desktop environment on another --- I think this is possible with other systems, but it's not easily exposed. Performance and fidelity of the experience is usually not that important for me, as long as it's usable.
I use it regularly with VMs. I find it a much better experience to have a bunch of terminals open on my host that are ssh'ed into the VM and can open new GUI windows on the host (generally meld for either git-merge or git-diff).
The main draw is just that it works so seamlessly. All you have to do is remember to add the "-X" flag when you call ssh and everything just works.
I use this a lot at work. I ssh into a server from my linux desktop and run things there, including GUI applications. Over a local network it works fine. It's much nicer to have local and remote applications displayed in the same way. I can also run things from remote networks, although I have to use the -C compression option on ssh.
I use it constantly (including across different OS/architectures), have been using it for more than 20 years and really can't live without it.
It works great. And Mozilla/Firefox has been working wonders with it (when you open a web link on a remote application, it opens the URL in the local, already opened Firefox and has been doing so since Netscape times).
With a 500Mbps internet connection and a 10ms ping to the server, the performance is quite usable. There is some lag when updating large bitmap areas, but for anything not graphic intensive, it works just fine.
Believe it or not, there are production environments that use X-forwarding over ssh. Modern computing relies more and more on remote streaming of interfaces. Even games are going that way.
I'm totally not impressed with the so-called "successor" to X... Wayland.
It might be unrelated, but I just upgraded to F30 and I'm seeing all sorts of weird graphics / screen drawing related bugs using Wayland. And while it looks like somebody is finally making some meaningful headway on a remoting story for Wayland, it's still not truly on par with X in this regard.
So sure, kill X... but we should probably have a replacement that actually works, first.
While I appreciate having choices at some levels, for infrastructure it's always good to have everyone agreed. Desktop Linux has been held back by the X11/replacement situation for years (there, I'll say it), and the sooner we can all get (back) on the same page, at least for this one piece, the better.
Many open-source communities are run by perfectionists (and I must count myself among the worst of them). While it's a noble goal to create a gem that will stand the test of time, it's disheartening to see proprietary products move faster by making a decision, even a lousy one, and then shipping it and making it work.
If the future of X is uncertain, then X is already dead. No one wants to base their OS on something without a clear maintenance and development roadmap.
It's not like distros contributed all that much to X in the first place. Most likely, X will remain until Wayland is ready - which it isn't yet, 'prototype' support for some very common use cases does not count as support.
I think they might have meant “pave their way”. Like they said, not all OS’s are products and some people have specific use cases in which X11 is the only viable option, whether that’s personal or experimental.
I wouldn’t know what those use cases might be. That’s just my interpretation of the comment.
F R A Hopgood, D A Duce, E V C Fielding, K Robinson, A S Williams
29 April 1985
This is the Proceedings of the Alvey Workshop at Cosener's House, Abingdon that took place from 29 April 1985 until 1 May 1985. It was input into the planning for the MMI part of the Alvey Programme.
The Proceedings were later published by Springer-Verlag in 1986.
I'm also a heavy user of an X-only window manager (xmonad). Learning about this makes me wonder if I should start learning how to work with another window manager so I'm not caught off guard when something stops working.
I'm really attached to stumpwm, which is also X-only. We have been hearing reports of the pending death of X for at least the past decade, so I'm never certain how seriously to take them, but I've kept an eye on what looks like the closest thing to a wayland implementation of stumpwm (https://github.com/malcolmstill/ulubis).
You could try to transition to the sway wayland compositor (which is largely compatible with i3wm) or maybe to https://github.com/letoram/arcan which is a lot more programmable than sway
My take-home from this thread, now with a few weeks to reflect, is that if we want a real window manager that is maintained, then it will have to be a community effort. This will be hard because training and involvement with X11 has been left entirely to Red Hat for too long. It will take a lot of work and money to transfer knowledge from the few who do know it.
With this in mind, are there other projects that are at risk due to stewardship by a single corporation? How can we prevent this in the future?
Software is dependent on hardware. If the hardware changes, as it did since X11 was invented, the software stack has to change too. You cannot prevent progress in technology.
Xorg is in a better state now that it has ever been. Therefore it makes no sense to consider Red Hat some villain that adopted Xorg and ruined it. And had Red Hat not been there, Wayland would still have been invented.
Xorg developers switched to Wayland. Why? Because they know it is the future. There is no need to train Xorg developers.
To be clear, I am not in any way faulting RH for this, the excellent state of X11 is indeed due almost entirely to their technical and financial support. This should be celebrated.
My point is that having such support can lead the community to assume that such support will always be there and neglect the need to maintain a robust multi-party collaboration around such vital projects.
Nothing prevented X11 from adapting to the hardware. In fact, commercial implementations did just that. It's XFree86 that was running pretty much on the original framebuffer-oriented code dump.
Also, if you actually run on a server that was more oriented towards modern hardware, like Xsgi, GTK3 will simply flip out, because it makes heavy assumptions that pretty much break.
Slightly off-topic, but is anyone here using Wayland with a touchpad?
The last time I tried libinput, the behavior was horrendous compared to synaptics.
Two-finger scrolling had a ~2mm threshold before it activated and wouldn't compensate for the lost distance once the scroll started, making it feel very unnatural.
I couldn't find an option to enable true palm detection either. There was an option to disable the touchpad for a fixed time every time the keyboard was used, which is abysmal compared to how it works normally (ignoring taps and drags if they start too close to an edge).
It's hard to believe that this is the current state of things, but I did search for a way to set it up with no success. Is there a way to get those to work?
I've been using Ubuntu with Gnome on Wayland for the past year or so on my Dell XPS 13.
I actually ended up switching to Wayland, because of how much better the touchpad worked with it. To be fair though, Dell has an entire team dedicated to supporting the XPS 13 Developer Edition, so the drivers are highly curated compared to most devices.
My experience was on an XPS 9570, not sure if there's a large gap in driver / support.
Out of the box synaptics was pretty bad, but after tuning the palm detection zone size and a couple timings (long press, etc) it feels amazing. When I don't have to test convoluted UIs, I can go days without plugging the mouse.
Don't you get the small "lag" when two-finger scrolling?
I've been super happy with the touchpad's performance out of the box; I literally haven't modified any settings related to it.
Are you referring to this bug[1] in Chromium? I think these are growing pains associated with transitioning to Wayland, but this really doesn't bother me too much. I will say that the scrolling is otherwise very smooth, and almost perfect everywhere else in the UX.
I hope they manage to standardize the majority of protocols. Last time I checked, GNOME wasn't happy, but the other projects were, surprisingly, cooperating heavily.
"Why I'm not going to switch to Wayland yet"[1] argues that:
"for simple things using the compositor's screen shot tool is fine. But what if I don't like the screenshot tool for my compositor of choice? My experience with the GNOME screenshot tool (granted this was pre-wayland) was that it wasn't as good as, say, shutter, which has a lot of options, let's you easily crop and edit the screenshot from inside the screenshot tool etc. And then swaygrab doesn't even (currently) have an option to capture a rectangular region."
There are some other things this article mentions which are important to me, like Wayland's lack of color picker tools and xdotool functionality.
Wayland seems just way too immature for me to use for now. X works great, does everything I want, and I see no compelling reason to switch.
The wlroots developers are working on protocol extensions to make stuff like screen grabbing work consistently across WMs: https://github.com/swaywm/wlr-protocols
So what are the BSDs planning on doing? Do OpenBSD/FreeBSD just plan on maintaining their own forks of Xorg, or are there plans to try and port Wayland to these operating systems as well?
FreeBSD user too, I'd like to know that as well. Not going to lay blame on RedHat for doing what is in their own best interest, but do find it troubling that the *nix world has had a major piece of software made redundant on the decision of one company/maintainer.
As an end user my perspective, perhaps incorrect, is that since the late 90s development of application and system software has become increasingly siloed with less diverse views and the involvement of fewer stake holders.
I suppose that would happen based on math alone (more linux end users with no interest beyond a working web browser) but I worry where that path might lead.
A tad concerned about this because my system still heavily relies on X and there really isn’t an alternative for my applications (yet). Wayland isn’t supported for many applications.
Does XWayland not solve your issues? It’s a pretty good (seamless) compatibility layer. Of course, it still involves running an X server, but applications can switch to Wayland one at a time and you can use both simultaneously.
Imho XWayland is "too good". There's little incentive for software like Chrome to port over to Wayland if it runs on it seamlessly via a compatibility layer.
Are there any major x11 features that wayland lacks other than remote operation over a network? I think at one point it locked the refresh rate to 60hz, has that been changed?
HiDPI. It's somewhat unusual for a top-end laptop to have a 1920x1080 screen these days, and Wayland doesn't support 4K very well. You can upscale, but it looks like garbage; or use its 2x mode, which draws images and such slightly too large; or you can deal with everything being tiny.
For gaming, the latency is bad and always on vsync is horrible. They've fixed the 60Hz lock though, fortunately. This only applies to gaming though.
My primary issue with Wayland isn't quite so much that it lacks these things, it's that they seem to be of the opinion that there's only one way to do things, and any other use of the visual display system is wrong. I just have this weirdly oppressive feeling about my inability to configure it the way I want it, the same feeling I get from using Windows. Just... let me be me. I get that this is a non-technical complaint, and that it ultimately boils down to "it gives me the heebie jeebies" which isn't an argument, but it's still how I feel about it.
I've been running Wayland gnome-shell on Fedora since Fedora 25, on a Dell m3800 and now a Dell XPS 15, both with 4K internal displays. I often connect to external monitors or projectors, either 1080p or 4K.
Wayland has better HiDPI support than X and has for a long time.
In order to make X work in any reasonable way you need to configure X to render at 4K all the time on all outputs and downscale using xrandr on 1080p displays. X cannot handle different DPI settings.
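A sketch of that xrandr workaround, as a config fragment. It assumes a 4K internal panel named eDP-1 and a 1080p external monitor named HDMI-1; the output names are illustrative, so check your own `xrandr` output first:

```shell
# Give the 1080p output a 4K logical size so both screens share one DPI:
# X renders everything at 4K, and xrandr scales it down onto the 1080p panel.
xrandr --output eDP-1 --mode 3840x2160 \
       --output HDMI-1 --mode 1920x1080 --scale 2x2 --right-of eDP-1
```

The cost is that the 1080p display is showing a downscaled 4K image, which burns GPU time and can look slightly soft, which is part of the complaint about X's per-output DPI handling.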
>For gaming, the latency is bad and always on vsync is horrible. They've fixed the 60Hz lock though, fortunately. This only applies to gaming though.
So there is no way for a game to bypass wayland and write directly to the fullscreen framebuffer? You're just stuck with an extra frame or two of latency?
That is the problem, right? Any functionality you could get in X11 by tweaking configuration files requires writing your own complete new display server in Wayland.
This is especially true for people who prefer low latency over tear-free rendering.
I don't get this thread. On one hand, people complain (wrongly) about Wayland being too monolithic. Now, people complain about there being competing Wayland implementations with different feature sets. What's it gonna be, guys?
Those complaints aren't actually opposed but perfectly complement each other.
Wayland requires* everything to be monolithically built into the display server (which is also the window manager), which means if I want to use a new WM (say, XMonad) I need to reimplement all of this stuff. Want screenshots? Build it into your WM! Want redshift? Build it into your WM! The result is that development effort is wasted as those "competing Wayland implementations" reimplement stuff that no-one actually wants to reimplement.
Compare X11, where I could run an Xorg server together with any of a number of lightweight window managers, and the window manager is only responsible for, y'know, managing windows, and determining how the window decorations look. Xorg handles everything else, allowing a robust marketplace of competing WMs to arise.
* Unless/until they finally give in and standardise protocol extensions for out-of-process window managers.
>I don't get this thread. On one hand, people complain (wrongly) about Wayland being too monolithic
What? You're mixing up systemd and Wayland, maybe. Everyone knows that Wayland is a protocol; this is repeated 100 times in each thread. Everyone also knows that most of X's features are not part of Wayland.
I don't think any of those problems can be solved solely in the compositor - they all require co-operation between it and the actual apps. Which means, in practice, that you'd probably have to create an entirely new version of the Wayland API for creating and managing windows, add it to all the compositors, and get all the apps to use it too, on top of all the other work involved in fixing those issues. (This is the only real way to extend the protocol and has already been done a few times - the most recent one I remember was to add the ability to minimize windows.)
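For reference, that minimize extension landed in the xdg-shell protocol. Wayland extensions are specified as XML that `wayland-scanner` turns into C bindings; the request looks roughly like this (paraphrased from the spec, not verbatim):

```xml
<interface name="xdg_toplevel" version="1">
  <request name="set_minimized">
    <description summary="set the window as minimized">
      Ask the compositor to minimize this toplevel. The client gets no
      direct acknowledgement, since minimization is compositor policy.
    </description>
  </request>
</interface>
```

This illustrates the scale of the coordination problem: the XML is the easy part, but every compositor must implement the request and every toolkit must start sending it.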
This is up to individual compositors and toolkits.
> Thousands of applications
Like what? Any that rely on X specific behaviour run via XWayland.
> Consistent performance
Wayland in theory should be faster than X, but again, this depends on compositor.
> Redshift
Gnome has night-light on Wayland already. KDE I think just added it.
> Global keybindings
There are proposals to allow better negotiation of key bindings. But I think it's a bonus that applications can't listen in on keys when they're not in focus.
I always wonder how people use their Linux desktops when they claim that Wayland works. Why aren't you using anything like wmctrl, xdotool, xprop, xbindkeys, xterm? Or a non-Gnome DE? Don't you need custom keybindings for multiple keyboard layouts with caps/scroll LED indication? Don't you use Wacom devices or anything like that?
The Wayland ecosystem is literally decades away from being usable, and it's very unlikely to survive those decades beyond maybe being a niche project for some special-purpose devices, not the Linux desktop.
> Why aren't you using anything like wmctrl, xdotool, xprop, xbindkeys, xterm? Or non-Gnome DE? Don't you need custom keybindings for multiple keyboard layouts with caps/scroll led indication?
I have used Linux desktops for years and have quite literally never used any of those things (except for non-Gnome DE, which was slightly less "don't make me think" so I eventually went back to gnome). Yes, even xterm. Other tools might have been shelling out to those other utilities you mentioned for me, I guess?
Point is, there are at least some reasonable use cases that don't engage with that stuff. I truly don't know how much of the display stack of my current distro is X or Wayland-related code; I have never had reason to care. I don't have a dog in this particular fight, but there are lots of users who use Linux desktops not because they are customizable, modular, or whatever, but because a) it's a free OS, and b) it's similar to environments we target for development at our jobs. Despite the vocal-ness of customization advocates, I suspect that the vast majority of desktop Linux users are "dark matter" that fall into this category.
I've used Wayland and Gnome ever since they became available in Arch Linux, and I've never needed or missed any of those things. Yes early on there were some annoying bugs but this is life on the bleeding edge ;) These days it all just works.
There's a large group of users that goes to great lengths to customize their desktop experience. This kind of change understandably frustrates them. But it's important to understand that many of us just don't care very much.
Can it run gnome-terminal and Firefox? Can I switch resolutions and do multiple displays? That covers a vast majority of what I ever need from a graphical desktop. Beyond that I don't really care how the sausage is made.
I haven't touched any of those tools in a decade or more. I just haven't needed to? The Linux desktop is so much less demanding than it was through the 90s and early 2000s. You don't need to know about any of that stuff to make it work, to customize things, set your favorite key settings (e.g. making Caps Lock useful), to have reasonable hotkeys, etc.
I would argue that not having to use those tools to have a happy working environment makes it further advanced, rather than behind.
A lot of the things that Stack Overflow claims you need X tools for can also be accomplished using sysfs. Without knowing the details of what you're doing I can't give you any specifics.
I maintain an embedded Linux system which uses Wayland as a compositor for Qt. I use sysfs to EG turn off cursors and set the display mode so I can blit a splashscreen. And sysfs can be used to rotate the console, change display modes, etc..
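As a concrete example of the fbcon sysfs knobs mentioned above (exact paths depend on kernel config, and writes require root):

```shell
# Hide the blinking framebuffer-console cursor:
echo 0 > /sys/class/graphics/fbcon/cursor_blink
# Rotate all framebuffer consoles 90 degrees clockwise
# (0=normal, 1=90, 2=180, 3=270):
echo 1 > /sys/class/graphics/fbcon/rotate_all
```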
On my desktop I don't use any of the tools you're listing and haven't needed then for almost 12 years. The wm handles that for me.
As for those other things you mention (xprop, etc.), I don't even know what they are, so I assume I've never used them.
I think you underestimate the degree to which desktop Linux users just want a working system with sane defaults. Maybe you personally value having all the knobs to twiddle, but I'd suggest you're an outlier.
Sway/wlroots aren't written in Common Lisp, like StumpWM.
The decision to leave window decorations up to the client application rather than the window manager is incomprehensible to me. It just doesn't make any sense so far as I can tell.
I don't want to run GNOME or KDE to get redshift behaviour; I want to run a window manager written in Lisp, with a console, emacs & Firefox. Everything else is a distraction from getting work done.
I agree that not allowing clients to listen to all keys by default is good; I disagree that not providing a mechanism for the end user to grant such access is a good idea. It's his computer — let him run what he wants.
>But I think it's a bonus that applications can't listen into keys when they're not in focus.
There are applications that do not run as a window. At present I have global shortcuts that run a bash or Python script. I know I am a power user, so I will need a way to whitelist my use case.
Yeah, that is the problem with Wayland: the solution to many missing features you get out of the box with X largely depends on the environment you are using.
I don't run a desktop environment in X. I run xmonad and terminals. I don't want a desktop environment in wayland, but it seems like that's the only way to get reasonable functionality because it can't be added piecemeal, only monolithically via the compositor.
This is not enough. You fix this use case, but there are many others. Maybe I want to make a voice/video chat app or a remote desktop app; I will need full control of the display and keyboard, not some gimped implementation that is DE-specific.
Wayland needs to offer full access to "power user applications"; otherwise we end up in a place where Windows is friendlier for the users who need this type of application.
So you either have to use one of the prescribed Wayland window managers, or completely rewrite your favourite window manager (remembering to include all of the bonus responsibilities it has now for everything the Wayland developers decided was "out of scope and the compositor's responsibility").
Alright, five years of work and debugging later...
> > Consistent window decorations?
> This is up to individual compositors and toolkits.
So it won't be consistent.
> > Consistent performance
> Wayland in theory should be faster than X, but again, this depends on compositor.
So this is the part where you go back to the window manager you wrote in step 1, and fix all the performance problems (collectively, over and over again, for every window manager that exists), right?
> > Redshift
> Gnome has night-light on Wayland already. KDE I think just added it.
Great! What if I'm not using Gnome or KDE? Right, I need to code that into my window manager too...
I don't use remote operation (as imagined by X11) at all. But I do use non-X11 screen-sharing systems all the time.
Ask yourself, "why do all of the major GUI systems - in daily use by billions of people, evolved over the past 40 years - opt to offer remote operation only as an optional add-on?"
And I use remote operations all day every day. As does the entire department with hundreds of people. Don't dismiss other peoples works flows without understanding them.
(I ran Linux + X11 as my sole desktop from 1996-2000. And then on and off again for years in VMs.)
I'm not dismissing the need. I'm just saying that the other systems work pretty well, and this is not something that needs to be baked into the core of the display system.
I used and loved Screenhero on the Mac for years before Slack swallowed them up. (Now there's tuple.app, made by some of the same people.)
Prior to that, I used Windows RDP for years. That worked pretty well, too!
Sorry if I misread your comment and thanks for editing it to make it clearer.
But why are you so opposed to having this feature baked into the display system? After all, you point out yourself that all display systems end up with solutions for this workflow. Why not do it right and build it into the display system, instead of adding it after the fact (and usually in an inferior way)?
I'm against baking it in precisely because it's not the right thing. Network transparency in X11 led to a design that didn't scale to meet the needs of the vast majority of users:
"The X11 protocol was never meant to handle graphically (in terms of bitmaps/textures) intensive operations. Back in the day when X11 was first designed computer graphics were a lot simpler than they are today.
Basically X11 doesn't send the screen to your computer, but it sends the display-instructions so the X-server on your local computer can re-create the screen on your local system. And this needs to be done on each change/refresh of the display.
So your computer receives a stream of instructions like "draw line in this color from coordinates x,y to (xx,yy), draw rectangle W pixels wide, H pixels high with upper-left corner at (x,y), etc."
The local client isn't really aware what needs to be updated and the remote system has very little information on what the client actually needs, so basically the server must send a lot of redundant information that the client may or may not need.
This is very efficient if the display to be rendered consists of a limited number of simple graphical shapes and only a low refresh frequency (no animations and such) is needed. Which was the case back in the days when X11 was first developed.
But modern GUIs have a lot of eye-candy, and much of that needs to be sent from the remote system to your client in the form of bitmaps/textures/fonts, which takes quite a lot of bandwidth. And all sorts of eye-candy include animated effects requiring frequent updates. And the displays keep getting bigger too; twice as wide/high is 4x the number of pixels.
Of course, over time, enhancements to the X11 protocol were made to optimize this as much as possible, but the basic underlying design is, in essence, simply not well suited to the demands of the kind of GUI's people nowadays expect.
Other protocols (like RDP and VNC) are more designed to let the remote system do all the hard work and let that system decide which updates to send to the client (as compressed bitmaps) as efficiently as possible. Often that turns out to be more efficient for modern GUI's.
Neither method is perfect and can deal with every situation equally well. There is no such thing as a single display-protocol that can do well under every conceivable use-case.
So in most cases you just try all protocols that are supported between your local client and the remote server and use the one that gives the best results. And in some cases there is no choice and you just have to make do with whatever is available.
Most protocols do allow some performance tuning, but many of these settings are server-side only and not available to the average user. (And configuring them properly is a bit of an arcane art. A lot of sys-admins won't be willing to mess with that.)
In most cases the easiest way to improve performance (sometimes quite dramatically) is by switching to a more simple desktop environment with less eye-candy and forego the use of background images."
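For contrast, the "stream of drawing instructions" model described in the quote is exactly what makes classic X remoting a one-liner. Assuming sshd on the remote host has X11 forwarding enabled (host and user names are placeholders):

```shell
# Run a client on the remote machine, render it locally.
# ssh -X tunnels the X11 protocol; the remote xclock sends drawing
# requests that the local X server executes on its own display.
ssh -X user@remote-host xclock
```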
Sorry, but a Stack Overflow answer (that doesn't even seem to be written by you) that gives a poor overview of how X11 does it is not an explanation of why it cannot be done. Or of why it should not be done, given that all platforms develop the ability to do exactly what I ask for at some point anyway.
First, I never claimed that the quote was written by me! That's what the quote marks and link are for.
Why have ALL of the other major systems - Windows/ReactOS, NeXT/Mac, BeOS/Haiku, iOS, Android, etc. - not bothered with implementing network transparency in the core of their display systems?
I'm sure that you could build a new system that has this network transparency feature. But I wonder how one would avoid failing the way that previous efforts failed:
Blub paradox. If you habitually confine yourself to a single computer, remoting never becomes a tool you reach for. NeXT had remoting because it was meant for power users in labs full of closely networked workstations.
Not sure what you're talking about here, Haiku's window system has network transparency, with drawcall-forwarding, even. In fact there is an HTML5-based remote client for it!
Probably not. The entire reason that the remote-drawing works is that the draw calls are passed through all the way up from the widgets and controls layer; for apps that don't use those (e.g. Qt apps, etc.) it's just a dumber VNC (we could make it less dumb, but, a little starved for time right now.) So then you would be porting the entire Haiku toolkit to Linux ... and at that point why not just use Haiku? We have our own kernel for a reason :)
As far as I know, not in the Wayland protocol itself. The devs have been pretty adamant that Wayland is 'just the compositor' and left the rest as an exercise for the community, or the toolkit developers. So for instance there is https://github.com/foss-project/green-recorder, but as far as I can tell, on Wayland it only works for GNOME, and requires dbus to work (wat?!).
* Screen recording is a privileged operation in Wayland which is left to compositors to implement as they see fit.
* Screen recording apps are therefore largely just nice frontends on whatever primitives the compositor exposes to do screen recording.
* GNOME speaks dbus and exposes those primitives as dbus services.
* So you need to speak dbus.
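Concretely, on GNOME versions that still expose the unrestricted interface, a screenshot is one D-Bus call (interface and method names from the org.gnome.Shell.Screenshot session service; availability varies by GNOME release):

```shell
# Arguments: include_cursor, flash, output filename.
gdbus call --session \
  --dest org.gnome.Shell \
  --object-path /org/gnome/Shell/Screenshot \
  --method org.gnome.Shell.Screenshot.Screenshot \
  true false /tmp/shot.png
```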
You can't really avoid dbus without going really far out of your way these days. Unless something better comes along it's going to be the Linux messaging passing system.
The wlroots developers have proposed protocols for screen grabbing etc. which are already supported by some window managers (e.g. Sway) and by some applications.
swaywm supports screenshots and screen capture for sure [0]. I haven't personally done anything with screen capture on sway, but I can definitely confirm that grim works for screenshots.
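For reference, the wlroots-world equivalents are small composable tools. Assuming grim and slurp are installed under sway:

```shell
# Full-output screenshot:
grim shot.png
# Region screenshot: slurp lets you drag-select a rectangle and prints
# its geometry, which grim -g then captures.
grim -g "$(slurp)" region.png
```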
And the latency depends on the compositor. I believe Arcan achieved 0 frame compositor latency with Wayland. I think the latest gnome-shell Mutter is there as well.
If your application can draw its frame and get its notification into the compositor before the compositor begins to draw, there's no need for any additional delays.
If your app cannot complete a draw in the time left in the frame by the compositor, then it'll be one frame behind.
How is gaming in it? Steam games? 3D accelerated games? Games in wine? Fullscreen games? Different resolutions? Alt tabbing between game and desktop? Old games and new games? WebGL?
Arbitrary mouse (or "mouse") button remapping in libinput. (My understanding is that libinput itself supports it, but due to some security-related reasons it has to be configured on behalf of the logged-in user, and the UI is just not there, or something.) The upshot is that I cannot use my trackball the way I'm used to, and that seriously sucks.
Can Wayland currently handle several monitors with different DPI? And by “handle” I mean, being able to resize (blurry, I know) windows that are not DPI-aware transparently when I move them between monitors. What Windows has been able to do for years now.
It’s not Wayland’s job, as I understand. GTK3 does it, I think Qt does it, and I’m not sure about anything else. Firefox has issues with menu positioning but works fine. Because X11 applications still talk using the X protocol to XWayland, which doesn’t handle DPI scaling, there’s no guarantee of it working.
Applications (and their toolkits) should be able to signal to Wayland whether they can handle DPI scaling or not, and if they can’t, Wayland should forcibly resize the windows (ie paint them bigger) like Windows does.
They decided against it because no one (except maybe xterm) was using X11's drawing APIs anymore. The whole point of Wayland is to get rid of all the unused legacy that lingers in X11.
Again -- XWayland is Xorg. The xorg-server codebase contains a number of components, one of which is a front end called DIX (Device Independent X) that handles protocol-level stuff, another of which are several backends collectively called DDX (Device Dependent X) that handle drawing to the actual video hardware or other graphics layer. One of these DDX backends is 'xwayland'. So the XWayland server incorporates the entire Xorg server code base except for the hardware-specific back ends. And so if Xorg languishes unmaintained... well, sucks to be an XWayland user.
So your best bet is to commit now to switching, entirely, to Wayland. The aim is to get everyone off of X altogether as quickly as possible, and then stop shipping X (including XWayland) altogether.
>> So the XWayland server incorporates the entire Xorg server code base except for the hardware-specific back ends.
Well doesn't that mean XWayland will be easy to maintain since it has no hardware dependencies? X can completely stagnate so long as the Wayland back-end is kept up to date with any changes there - and no, people don't want to be changing Wayland protocols because every compositor would have to be updated. I think XWayland will be around for a while yet, but I'd rather have things run native Wayland.
Ostensibly, yes, but that would clash with the official narrative ("X is broken and bloated and needs to be deprecated"). If they can maintain Xwayland indefinitely, they can maintain a stub DDX that works with kms indefinitely also. What they are telling us is that they want to get rid of the entire thing. Pointing out that Xwayland would be relatively easy to maintain would be stating the emperor has no clothes.
> So your best bet is to commit now to switching, entirely, to Wayland. The aim is to get everyone off of X altogether as quickly as possible, and then stop shipping X (including XWayland) altogether.
Ah, it is the "32-bit isn't needed anymore" thing all over again. Yeah, this will happen around the same time you won't need to ship 32-bit support on Linux :-)
If the sole maintainer of the 32-bit code base decided to fuck off, and no one else is stepping up, your choices are to either stick with your old OS forever, or get a 64-bit machine.
Maybe nobody is stepping up because they have no reason to do so and nobody even asked anyone to? If the message was "hey, we want to stop working on Xorg and move to Wayland, is anyone interested in taking over?" instead of "Xorg is dead, we won't work on it, nobody will work on it, ever", then perhaps some people would show up?
Besides, even though i am not interested in taking over Xorg, if something breaks on it i'm more likely to look into fixing it than switching my environment to Wayland.
That's not the germane point. There are applications dating back over 30 years written for X11. Thousands upon thousands of them. Compatibility matters.
Wayland may turn out to be a flash in the pan. Those applications aren't going away. I have many open source and proprietary applications which I fully intend to carry on using. They don't, and won't, support Wayland.
Unless you're using a modern toolkit which will be modified to support Wayland, you're not going to be using Wayland in a hurry. Because you'll be tied into the X11 world.
I can't help but notice that Xweston and wwlnest are both projects that seem to have gone years without an active contribution. Sometimes projects become mature and don't need additional work, but I would be stunned if anything to do with Wayland had reached that point yet.
> Sometimes projects become mature and don't need additional work,
You mean like X?
It seems to me the Wayland thing has a lot of support by people who think lack of changes in and of itself makes X completely unusable. That the software world must exist in a state of perpetual rewrite or it sucks.
I predict my minimal X setup will remain usable for a long time.
I seek to be enlightened, not to flame: what exactly is so wrong with X that it can't be retrofitted to work better? Wayland is taking a while, which is expected, sure, but for now X works fine, although with issues.
You didn't actually answer the OP's question. There is nothing Wayland does that could not have been done via the X11 extension mechanism. X has already been reinvented several times --- few things use the core protocol nowadays, after all! For example, XRender for text is now universal.
The thing that's different about Wayland is that for reasons that nobody ever explains clearly at a technical level, this time, a few influential people created a new and (IMHO, inferior) protocol instead of continuing to use the X extension mechanism.
Security? Isolate the X session data to individual client connections. Buffer management? Do it like GLX does. Want server-side decorations for some reason? What's stopping you? Sure, ICCCM isn't trivial, but it exists, and it's flexible.
Apple evaluated X11 when they were just starting work on Mac OS X.
This comment by Mike Paquette (designer and author of Quartz!) explains why it was better to dump X11 and start over:
"> they don't even use X at all!
What Apple is providing is an Apple-original window system that is graphics model agnostic, as well as a vector drawing system that maps very well to PDF, which is a sort of PostScript without the non-graphical operators. This is packaged under the name 'Quartz' for easy reference by Marketing types.
The window system is designed to support both buffered (like an offscreen PixMap) and unbuffered windows, and is graphics model agnostic, working equally well with QuickDraw, OpenGL, the Quartz drawing engine, X11, and third party solutions, and managing window geometry for the Classic, Carbon, and Cocoa environments. The server portion is a hybridization of screen arbiter and compositor models (and if that's all Geek to you, don't worry about it).
The Quartz drawing engine supports drawing primitives similar to the graphics primitives that might be found in the DPSClient single-operator primitives library for X and NeXTSTEP. There are no math and flow control primitives, as these can be done more efficiently in the native compiled code. There are no DPS or PS wrappers, as this optimization for server-side graphics is not needed in the Quartz client-side graphics model.
The operations provide imaging and path construction and filling operations as well as some interesting other bits that map well into the direction that 2D drawing is headed. (See Longhorn, or the X raster projects.) The drawing engine can output to rasters (like a window!), as well as PS and PDF streams to feed printers. The Mac OS X printing system takes advantage of the capabilities of Quartz to support all sorts of printers, and make the life of printer driver developers much, much easier.
Things we'd need to add/extend in X Window software (protocol+server+manager+fonts+...):
1) Extend font server and services to vend outlines and antialiased masks, support more font types, handle font subsetting.
2) Extend drawing primitives to include PS-like path operations.
3) Add dithering and phase controls.
4) Add ColorSync support for drawing and imaging operations, display calibration
5) Add broad alpha channel support and Porter-Duff compositing, both for drawing in a window and for interactions between windows.
6) Add support for general affine transforms of windows
7) Add support for mesh-warps of windows
8) Make sure that OpenGL and special video playback hardware support is integrated, and behaves well with all above changes.
9) We find that we typically stream 200 Mb/sec of commands and textures for interactive OpenGL use, so transport efficiency could be an issue.
So, yes, it looks like we can use X for Quartz. All we need do is define extensions for and upgrade the font server, add dithering with phase controls to the X marking engine, add a transparency model to X imaging with Porter-Duff compositing support, make sure GLX gets in, upgrade the window buffering to include transparency, mesh warps, and really good resampling, and maybe augment the transport layer a bit.
Ummm... There doesn't appear to be much code left from the original X server in the drawing path or windowing machinery, and it doesn't appear that apps relying on these extensions can work with any other X server. Just what did we gain from this?
Oh, yeah. My mom can run an xterm session on her desktop now without downloading the Apple X11 package, a shareware X server or buying a software package.
That comment is my point. Those things that the Apple guy mentions having to add to X were in fact added to X. They're called XRender. There's no reason we couldn't have added more extensions.
Nobody has given me a straight answer for why Wayland-style buffer management couldn't have been an X11 extension. Nobody.
I'll miss x11vnc which is currently the only way on linux (afaik) to have a network connected monitor and between which I can seamlessly move my windows.
I don't really like Wayland, and I think X is better. There are some problems with X, and I have some ideas on how to fix them, including getting rid of a lot of the extensions of the X protocol, making XBell work like XkbBell instead, moving many things out of the protocol, and also some other stuff.
Sway has become one of my favorite pieces of software. The only issue is that Sway or Wayland seems to have trouble implementing an easy to use redshift/nightlight. Based on my research into eye strain and talking to my optometrist I keep redshift on 24/7 and I can’t go back.
I didn't know that. Some time ago I was trying to make an old NVIDIA video card work with my PC on Fedora. One of the instructions was to open xorg.conf and change the driver to replace the nouveau driver, or something like that. I think it wasn't even 2 years ago.
Then I just installed Ubuntu and ran apt-get install nvidia-whatever-version. And it worked like a charm for me.
When you do need configuration, best practice is to put only the necessary settings in a configuration snippet inside /etc/X11/xorg.conf.d/, without creating a full xorg.conf file.
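A minimal example of such a snippet (the filename and Identifier are arbitrary; the Driver line is the part that matters):

```
# /etc/X11/xorg.conf.d/20-nvidia.conf
Section "Device"
    Identifier "NVIDIA Card"
    Driver     "nvidia"
EndSection
```

Xorg merges every file in that directory into its configuration, so you never have to maintain a monolithic xorg.conf.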
I intend my next PC build to last ten years, like my last one has. I will be including an AMD GPU, because I don’t want to be left high and dry by NVIDIA when Wayland takes off.
Nvidia is mostly used on Linux, as most of their GPUs are installed in datacenters for deep learning acceleration purposes. More than that, Nvidia is the only major video hardware manufacturer that keeps up to date drivers for other free UNIXes, e.g. FreeBSD.
Datacenters are not desktops though, which I assume is what X11 is used most for. But, like I said, my comment was just my impression regarding the matter, based on the blog post I linked. Thanks for your reply! :)
Most of their GPUs are still used for gaming and graphics acceleration, but you are right that their cards and drivers generally work well on Linux and FreeBSD. I think the main issue is that they have no intention of making their driver open-source which understandably irks the Linux community.
> "Nvidia is mostly used on Linux, as most of their GPUs are installed in datacenters for deep learning acceleration purposes."
Supposing that's true, it has little to do with the reality that Nvidia blatantly doesn't give a damn about the linux desktop uses of their cards.
These constant comments about the issue blaming everybody but the actual party responsible (nvidia) really are becoming tiring. If you can't be bothered to research whether hardware is well supported by linux before you buy it, then what the heck are you even doing in this industry? Merely searching "is nvidia total shit on linux" online should have been enough to inform your hardware purchase. You and nvidia made this bed, stop blaming other people when it's not comfortable. Not a single time in my life have I purchased an nvidia card. You have the choice too.
GNOME, which is what powers the RedHat desktop, supports nvidia on Wayland just fine, and KDE got nvidia/EGLStreams support since KWin 5.16. So at least 2 major DEs do, though quite a few others don't, and have taken a principled stand not to (Sway for example won't).
When it comes to RedHat though, they're all-in on GNOME anyway so keeping X around for nvidia users isn't really a concern for them when their desktop experience supports nvidia on Wayland just fine.
It defaults to X, because it works very poorly with Wayland. Until very recently, it was impossible to run Gnome with Wayland at all (even though it theoretically supported Eglstreams), as there were many kernel errors generated by gnome-shell, etc. Now it sort of works, however, lags and freezes make it completely unusable in most cases. I am testing it every Fedora release. It _is_ possible to use Gnome with Wayland, I managed to get it working with Arch Linux once. However, it took me 4 days, manual patches (literally editing the WM code myself to fix some issues as suggested by the community) to get the configuration right, and the next rolling Arch update broke it beyond repair.
Again, I am not against Wayland. The Wayland API is beautiful and I want it adopted more widely and sooner. It is just sad that I and many other users are not getting it. Developers blame Nvidia for not open-sourcing their driver (which is a valid concern); however, there are many other binary-blob drivers in the mainstream Linux kernel, and everybody except hardcore open-source apologists is OK with that. Why Nvidia is different, I don't understand.
Developers don't blame nvidia because they won't open source their driver. They blame nvidia for insisting that others shoulder the tech debt of EGLStreams (a closed, proprietary, nvidia-only solution with no documentation, whose implementation can't easily be tested and verified by compositor authors) instead of enabling the use of GBM with their drivers like every other Linux graphics driver does. nvidia doesn't have to open up their drivers for that: AMD's closed source driver (so not the Mesa stack) supports GBM just fine.
KWin 5.16 might technically support EGLStreams, but I have an Nvidia card and tried it out: after setting an env variable to even make it try to work, Plasma crashed during startup, kicking me back to the login screen. I tried again, and it managed to start up, but then plasma-shell crashed when I tried to search for something. Performance varied dramatically; the mouse cursor switched from 60fps to 1fps randomly.
(Update: I upgraded to the freshest Fedora and tried to enable Wayland again. It finally worked! After 3 years of trying to make it happen! It is not perfect (the screen flickers occasionally, input latency is slightly increased), but it is at least usable. The window movement, animations, etc, everything is buttery smooth, as it should be. So, my rants are over, Wayland FTW.)
Actually, I wouldn't be bothered if Nvidia went out of business, as I find it disgusting how they have ignored consumer wishes for ages. I mean, in the past I have bought GPU hardware from Nvidia, AMD, and Intel, but for a Linux desktop the experience has been so much better when not using Nvidia that I try to avoid their products nowadays.
This won't help current owners of Nvidia hardware (like myself), but you shouldn't blame RedHat for issues that have obviously been caused by Nvidia.
They don’t ignore consumer wishes. They have the best GPGPU toolchain around. That’s why I only use NVIDIA. Nobody supports developers for High-Performance GPU computing like they do. They also support PC gamers very well.
Probably depends on the use case: I mean, weren't the cryptocurrency people using AMD before they switched to FPGAs and custom ASICs? But for sure, for TensorFlow users there doesn't seem to be a viable alternative to NVIDIA, and there must be a reason why CUDA is more popular than OpenCL.
What I mean by ignoring consumer wishes is that they neither offer an open-source driver, like AMD and Intel do, nor even support others in doing so, like the Nouveau people. Consequently, as a Linux user with NVIDIA hardware, you are stuck with either the buggy binary driver or the feature-incomplete Nouveau driver, while AMD and Intel users have relatively stable and feature-complete open-source drivers.
Why do you think Linus Torvalds likes Nvidia so much?
1163 vs 216 lines of code! CUDA is really straightforward; ROCm is still low-level and incomprehensible (unless you are a C++ expert who happens to love low-level details).
For as long as X is considered to be a reasonable alternative to Wayland, NVidia is unlikely to suffer as a result of its drivers being unable to support Wayland. If Wayland becomes the only practicable option for graphics in Linux, then NVidia is going to be forced to support it to keep their Linux market share.
But that is the point of the post: X maintainers will abandon X and it will not be considered "a reasonable alternative" anymore. Nvidia supports Wayland just fine through its EGLStreams API, but some WM/driver developers make a principled stand of not using it.
This is troublesome, and it feels like "progress" for the sake of "progress" rather than actual improvements. The X Window System is time-tested technology, while Wayland, from what I've seen and read, is not.
This isn't a good thing to me, especially considering NVIDIA hardware doesn't seem to be well supported. Someone come along and correct me, please.
X11 is a pile of hacks. Have you ever tried to read the source code? When the X11 devs design a new system and stop maintaining X11, it's likely that X11 is not in good shape.
>This isn't a good thing to me, especially considering NVIDIA hardware doesn't seem to be well supported. Someone come along and correct me, please.
It's the other way around: NVIDIA doesn't support GBM, the standard for buffer allocation.