I would say that the real reason is simply that "it works". As simple as that.
The first thing you need when you make something new is to make it work; it is much better to have something that works badly than something that does not work at all.
Take for example the Newcomen engine, with an abysmal efficiency of half a percent. You needed 90 times more fuel than an engine needs today, so it could only be used in the mines where the fuel was.
It worked badly, but it worked. Later came efficiency.
The same happened with locomotives. Terrible efficiency at first, but they changed the world.
The first thing AI people had to do was make it work on all OSes. Yeah, it works badly, but it works.
We downloaded a Clojure editor made in Java to test whether we were going to deploy it in our company. It gave us some obscure Java error on different OS configurations, both Linux and Mac. We discarded it. It did not work.
We have engineers and we could fix those issues, but it is not worth it. The people who made this software do not understand basic things.
We have Claude working in hundreds of computers with different OSes. It just works.
Your examples of engines are less about "it works" and more about doing a thing we couldn't do before, and doing it better than the previous thing. But neither of those is especially true of React.
React was an instant hit because it had the Facebook brand behind it and everyone was tired of Angular. But ultimately, React has worse outcomes for developers, users, and businesses. On the web, React sites are bloated: they run slower, their JavaScript payloads are larger, and they take longer to load.
Your suggestion -- that it works and then it gets more efficient later -- would make sense if we lived in a world where React moved off the virtual DOM model. A virtual DOM is a fine first attempt or prototype, but we can do better. We know how. Projects like SolidJS do do better. React has not caught up, yet it is still very popular. This whole "It worked badly, but it worked. Later came efficiency" thing is complete nonsense.
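For the curious, the fine-grained reactivity that projects like SolidJS use instead of a virtual DOM can be sketched in a few lines. This is illustrative only, not SolidJS's actual implementation: effects subscribe to the signals they read, so an update re-runs only the affected subscribers, with no tree diffing involved.

```javascript
// Minimal signals sketch (not SolidJS's real API).
let currentEffect = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    // Reading inside an effect registers that effect as a subscriber.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // re-run only the subscribers
  };
  return [read, write];
}

function createEffect(fn) {
  currentEffect = fn;
  fn(); // first run registers the dependencies
  currentEffect = null;
}

// Usage: only the one effect reading `count` re-runs on update.
const [count, setCount] = createSignal(0);
let rendered = null;
createEffect(() => { rendered = `count is ${count()}`; });
setCount(5); // rendered is now "count is 5"
```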
And there are loads of businesses that started off with an Angular app, started to migrate to React, then started to migrate to React hooks, and are now switching to whatever the latest methodology is. Time and again you find these products, endlessly migrating to the new thing, most of them never finishing a migration before beginning a new one. So these products end up as a chimera of four different frameworks held together with pain.
This isn't a good outcome for businesses or for users, and it's not a good developer experience. React is stagnant, surviving off being the default, the status quo, supported by tech companies that have long since stopped innovating and subsist on rent-seeking. Developers choose React because nobody was ever fired for buying IBM, because they can look busy at their job, and because they buy a new phone and laptop every year with the latest hardware that can compensate for the deteriorating software they ship.
> React was an instant hit because it had the facebook brand behind it and everyone was tired of angular.
Ok, but why was everyone tired of Angular? Sure, web frameworks are examples of Fad Driven Development in the extreme, but Angular.js was pure unmitigated ARSE.
Made ten bindings on a page? That's 100 cross-connections. Made 100 two-way bindings? That's 10,000 connections.
Clicked one way through fields A then B, then started typing: they show the same data. Clicked through fields A and C: now those are bound but B isn't. Clicked B then C: congrats, all three of your bindings suddenly start filling in.
It was a combination of shitty performance scaling and unintuitive Angular data flow that primed everyone for React to take over.
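For anyone who never suffered through it, the quadratic blow-up can be sketched with a simplified dirty-checking digest loop. This is illustrative, not Angular.js's real code: every pass re-evaluates every watcher, and any change forces another full pass, so n interdependent bindings can cost on the order of n² checks.

```javascript
// Simplified Angular.js-style digest cycle.
function digest(scope, watchers) {
  let checks = 0;
  let dirty = true;
  while (dirty) {
    dirty = false;
    for (const w of watchers) {
      checks++;
      const value = w.get(scope);
      if (value !== w.last) {
        w.last = value;
        w.listener(value, scope);
        dirty = true; // something changed: re-check everything
      }
    }
  }
  return checks;
}

// Nine chained bindings, ordered so a change propagates one slot per pass:
// the digest re-runs all nine watchers ten times -- 90 checks, ~n^2.
const scope = { a: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] };
const watchers = [];
for (let i = 9; i >= 1; i--) {
  const idx = i;
  watchers.push({
    last: undefined,
    get: (s) => s.a[idx - 1],
    listener: (v, s) => { s.a[idx] = v; },
  });
}
const totalChecks = digest(scope, watchers);
```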
They could have done better. They chose the path of least resistance, putting in the least amount of effort, spending the least amount of resources into accomplishing a task.
There's nothing "good" about Electron. Hell, there are even easier ways of getting high-performance cross-platform software out there. Electron was used because it's the default, de facto choice; nobody bothered to research or test whether it was the right choice, or even a good choice.
"It just works". A rabid raccoon mashing its face on a keyboard could plausibly produce a shippable electron app. Vibe-bandit development. (This is not a selling point.) People claiming to be software developers should aim to do better.
> They could have done better. They chose the path of least resistance, putting in the least amount of effort, spending the least amount of resources into accomplishing a task
You might as well tell reality to do better: the reality of physics (water flows downhill, electricity moves through the best conductor, systems settle where the least energy is required) and the reality of business (companies naturally move toward solutions that cost less time, less money, and less effort).
I personally think that some battles require playing within the rules of the game, not wishing for new rules. Make something that requires less effort and fewer resources than Electron but is good enough, and people will be more likely to use it.
Shaming the use of Electron? I'll do that every day and twice on Sunday. Same with nonsense websites that waste gigabytes on bloat, spam users with ads, and feed the adtech beast. And I'll lay credit for this monument to enshittification we call the internet at the feet of Google and Facebook and Microsoft.
Using electron and doing things shittily is a choice. If you're ever presented with a choice between doing something well and not, do the best you can. Electron is never the best choice. It's not even the easiest, most efficient choice. It's the lazy, zero effort, default, ad-hoc choice picked by someone who really should know better but couldn't be bothered to even try.
It might be a strange thing to say, but Java is still a viable alternative route. You can build a nice, fast cross-platform desktop application on it today. The language was designed for this kind of thing. The entry barrier is quite high, though.
As a recent toe-dipper into Linux (now running Arch on a powerful mini PC with KDE Plasma), I'm shocked at how little progress has been made on the native UI side.
As far as I can tell after a quick Google, you can't share your Qt UI with the browser version of your app. Considering that "lite" browser-based versions of apps are a very common funnel to a more featureful desktop version, it makes sense to just use the UI tools that already work and provide a common experience everywhere.
The same search incidentally turned up that Qt requires a paid license for commercial projects, which is surprising to me and obviously makes it an even less attractive choice than Electron. Being less useful and costing more isn't a great combo.
> you can't share your Qt UI with the browser version of your app
You can with WASM (but you shouldn't).
> Qt requires a paid license for commercial projects
It doesn't; it requires a paid license if you don't want to abide by the (L)GPL license, which should be a fair deal, right? You want to get paid for your closed-source product, so you should not have any reservations about paying for their product that enables you to create yours, right? Or is it "money for me, but not for thee"?
> Being less useful and costing more isn't a great combo.
Very nice, but now explain why you are talking about using Qt to create apps, whereas the grandparent talks about the experience of using apps created with Qt.
I looked up the WASM Qt target and it renders to a canvas, which hampers accessibility. The docs even call out that this approach barely works for screen readers [0], and that it provides partial support by creating hidden DOM elements. This creates a branch of differing behavior between your desktop and browser app that doesn't have to exist at all with Electron.
It should go without saying that the requirements of the LGPL license are less attractive than the MIT one Electron has; fairness doesn't really come into it. Beyond the licensing hurdles that Qt devotes multiple pages of its website to explaining, they also gate commercial features such as "3D and graphs capabilities" [1] behind the paid license, which are more use cases thoroughly covered by more permissively licensed web projects that already work everywhere.
On your last point I'm completely lost; it's late here so it might be me but I'm not sure what distinction you're making. I guess I interpreted dmix' comment generally to be about the process of producing software with either approach given that my comment above was asking for details on alternatives from the perspective of a developer and not a user. I don't have any personal beef with using apps that are written with Qt.
Please do continue to waste energy on doing something that will do nothing but allow you to feel superior about yourself. In fact, you will probably waste more energy than Electron ever has.
I agree with you; I even think it's shameful. When I saw it was Electron, I sighed so long I almost choked.
Can't even cmd+G or shift+cmd+F to search; the context menu has nothing. Can't even swipe; no gestures, etc.
Electron is better than nothing, and I'm grateful, but it tastes bitter.
As for performance: somebody here, if I remember correctly, once asked "what's the point of 512GB of RAM on the Mac Studio?"
And then someone replied "so you can run two electron apps".
Nah, some developers are lazy, that is all; let's not beat around the bush with that one.
Most of those Electron folks would not manage to write C applications on an Amiga, or use Delphi, VB, or whatever. Educated on Node, they do not know anything else.
Even doing a TUI seems like a revelation to the current generation, something quite mundane and common in 1980s text-based computing with Turbo Vision, Clipper, and curses.
Let's assume for the moment the developers are doing about the best they can for the goals they're given. The slowness is a product decision, not an engineering problem. They are swimming in cash and could write a check to solve the problem if they cared. Evidence is they have other priorities.
The sad reality is everyone wanting fancy-looking pages/apps as quickly and easily as possible.
And now the web (and increasingly the desktop) is littered with the lowest common denominator of platforms, with all sorts of crazy optimizations that still can't be as snappy as a Windows 95 app on a 200 MHz / 16 MB desktop.
At this point we may as well just use electron and nodejs for fighter jets and missile defense systems. Surely it’s fast enough for that, too.
At least it seems like a lot more apps are cross-platform than before. I wouldn't call the native devs lazy for not making a Mac version of their Windows app.
Early games frequently took the approach of inventing an interpreted machine code in which the bulk of the game would be written, with an assembly interpreter that would need to be rewritten for each target ISA and modified for the specific peripheral mix of each target machine.
The approach runs slower than a game written directly in assembly, but the cost to port to different architectures is much lower.
Sort of like Electron trades off native performance and look-and-feel to make multi-platform apps much more achievable.
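A toy version of that idea, for illustration only (real engines such as Infocom's Z-machine or SCUMM had far richer instruction sets): the game ships as portable bytecode, and only the small interpreter loop has to be rewritten for each machine.

```javascript
// Tiny stack-based bytecode interpreter sketch.
const OP = { PUSH: 0, ADD: 1, JNZ: 2, HALT: 3 };

function run(program) {
  const stack = [];
  let pc = 0; // program counter
  while (true) {
    const op = program[pc++];
    switch (op) {
      case OP.PUSH: stack.push(program[pc++]); break;
      case OP.ADD: stack.push(stack.pop() + stack.pop()); break;
      case OP.JNZ: { // jump if top of stack is non-zero
        const target = program[pc++];
        if (stack.pop() !== 0) pc = target;
        break;
      }
      case OP.HALT: return stack.pop();
    }
  }
}

// Computes 2 + 3. Porting to a new machine means rewriting run(),
// not the program -- the same trade Electron makes, at a larger scale.
const result = run([OP.PUSH, 2, OP.PUSH, 3, OP.ADD, OP.HALT]);
```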
IMO the OS vendors failed everyone by refusing to even attempt to agree on a common API for UI development, paving the way for web browsers to become the real OS and, ultimately, embedded browsers to be the affordable and practical way to be cross platform.
That’s the hard swallow. End users only care about the UX and it working. They don’t care how succinct your code is. They indirectly care about its performance.
Realize, though, that just grabbing a frame buffer is not a thing anymore. To render graphics you need GLES support through something like ANGLE, vectors and fonts via Skia, Unicode, etc. A web browser has those things. Any static binary bundling those things is also gonna be pretty large.
And JavaScript is very good at backwards compatibility when you remove the churn of frameworks (unfortunately Electron doesn't guarantee compatibility quite as far back)
There is a difference between being in business and thriving.
I've been in a few companies that managed to eke out a living by maintaining a piece of software no one in their sane mind would still maintain. Sometimes a government gig, sometimes the private sector.
Shoutout to my boys that in 2018 maintained a Java 1.3 app. Still going strong to this day (it was migrated to Java 8 last time I checked).
EDIT: ~21 Delphi apps in world! Woohoo! Delphi number #525
Sure, either of them qualifies. I was just pointing out that Delphi (and C, and VB) aren't really widely used in GUI toolkits anymore.
It's not just the difficulty. It's the lack of learning materials, widgets, examples, and whatnot. Debuggers also suck, at least they did when I used Delphi.
What also helps is having a huge behemoth of a corporation improving your GUI toolkit for free (Chromium, although you pay in ad exposure).
Go replace discord, slack, and vscode natively then. If everyone is crying out for the superior missing and obviously better alternative then that's a great opportunity.
Don't use Discord, never plan to. Even if I cared, I would only use the browser as it should be.
Slack only lives on the browser, as it should.
VSCode is only used when there are no SDKs for my editors or IDEs, and I am forced into VSCode.
However contrary to Electron crap, VSCode at least has tons of external processes written in a mix of C++, Rust and C#, and parts of it have moved into WebGL, not the traditional Electron crap application.
What if pursuing that opportunity exceeds the risk appetite / budget of the person you're suggesting this to, by about a billion, and even then has the exact same potential to be corrupted by the organizational dynamics that resulted in the current generation of sloppy designs?
I agree, this is clearly an indictment against LLMs. If LLMs and agents were capable they'd 100% write it natively but they realize the current limitations.
I have tested this and they are very capable: replacing the modern Windows calc (38MB of memory) with an identical app (UX, looks, features, accessibility, localization) ended up with an app using 2MB of memory.
You just need to point it in the right direction (c/rust/go etc) and be harsh with the requirements, especially memory usage.
If I were Microsoft I would use AI for this rather than badly embedding AI everywhere; a lot of power users would be won over by a Win 11 update where OS apps/features dropped memory usage by 90%+.
Electron makes it very easy to deliver a good-quality app for everyone but absolute power users.
Yes, it's horribly slow, but it enables rapid experimentation, and it's easy to deliver the wide range of UI integrations you are likely to want in a chat-esque experience.
It’s not even horribly slow. It works fine. It’s just a chat program. It’s the right trade off for the job.
Doing more work for no reason is stupid even if you have the money of a small nation.
The inevitable differences between platforms you get with all native everything isn’t a good user experience, either. Now you need to duplicate your documentation and support pages and have support staff available to address every platform. And what’s the payoff? Saving 80MB of RAM? Gaining milliseconds of latency that Joe Business will never notice as he’s hunt and pecking his way through the interface?
I thought we were done with Electron hate articles. It’s so 2018 to complain about it. It’s like talking about millennials and their skinny jeans. Yawn.
I'm dealing with MS Teams, which is for me a chat and video app. It uses 2GB of RAM, which is more than my local PostgreSQL. It must sit there in the background wasting 2 of my laptop's 16GB, or people can't reach me.
And if MS stopped randomly moving things around in the UI with no benefit whatsoever, their documentation could be useful instead of telling me where I could find some setting 3 years ago.
I've programmed in every native Windows GUI starting with MFC. Used Delphi too. I've even created MS-DOS TUI apps using Turbo Vision.
Compared to the web stack and Electron, native GUI APIs are complete shit. Both for the programmer, but also for the user.
Reactive UI (in the sense of React, Vue, ...), the greatest innovation in UI programming ever, was made popular by web-people.
Unless you write some ultra-high-performance app like a browser, a CAD app, or a music production app, it doesn't make any sense to use native GUIs. It takes longer to program, and the result is uglier and has fewer features. And for those ultra-high-performance apps you don't use the native UI toolkit anyway; you program your own widgets, since the native ones are dog shit and slow.
Reactive UIs may have been made popular on the web, where they're an absolute nightmare, but native code does them better still.
Best time I ever had in a job was writing WPF applications in C# using ReactiveUI. Once we really understood the underlying model we were plugging stuff together so easily. It is a really good model, but I can't see how React is a good example of it.
Of course I had lots to complain about then, WPF had bugs, C# has a number of big problems, but it was, overall, very nice.
Don't forget about all the embedded UIs (kiosks, appliances, car infotainment, industrial equipment, ...), those computers are weaker and it makes a ton of sense to use native toolkits there.
They tried to replace our Qt GUI at work in this space with a react based one, the new UI ran like utter shit comparatively, even after a lot of effort was spent on optimisation.
Also, most of the world doesn't use high-end developer laptops. Most of the world is the developing world, where phone apps on low-end Android phones reign supreme.
The meaning of "native" in these discussions is "no web technologies", because Qt gets thrown around and that's about as native to macOS as Electron is, just in a different manner.
That's the exact same way in which AAA games are native, which well, they are. As the article itself makes clear, the OS-default toolkit doesn't really have a privileged status on macOS today, any more than it does on Windows or many Linux distros.
Something that isn't touched on as much is that in the time between old-school native apps and Electron apps, design systems and brand languages became much more prevalent, and implementing native UI often means compromising design and brand elements. Most applications used to look more or less the same; nowadays two apps on the same computer can look completely different. No one wants to compromise on design.
This mentality creates a worse experience for end users, because every application has its own conventions and no one wants to be dictated to about what good UX is. The best UX in every single instance I've encountered is consistency. Sure, some old UIs were obtuse (90% weren't), but they were obtuse in predictable ways that someone could reasonably navigate. The argument here is between platform consistency and application consistency: should all apps on the platform look the same, or should the app look the same on all platforms?
If I look at the Notion and Linear desktop apps, they’re essentially identical in styling and design. They’re often considered the best of today’s web/Electron productivity apps, and they have converged on a style that’s basically what Apple had five years ago.
IMO that’s a fairly strong argument that the branding was always unnecessary, and apps would have been better off built from a common set of UI components following uniform human interface guidelines.
I do notice those things occupying your "essentially," and your "basically." The success of worse designed stuff is a hard thing to argue against, though.
> OS vendors use everything in their power to make you not want to develop native apps for their platform
I'm not sure this is true. Cocoa remains one of the most amazing libraries ever. Delphi's VCL is the same (Win32, ugh, but the VCL is native, and it's wonderful.)
The difference perhaps is that you need to re-implement. And there is a very good question: if AI boosts productivity so much, why is this not now being done even by AI companies?
I suspect the reason is not that it's not possible, but that native user experience is not a value. Since circa 2012 (the loss of Aqua, Windows 8) native UI as a driving force in user experience by OS vendors lost value; the web replaced it with different UIs per site/app; and the OS experience of generic, boring UI meant that the OS receded into the background. It's only folks who experienced what good OS-driven UI was that value it, and I worry that they don't have a voice.
But Anthropic could, if they chose, demonstrate the value of their AI productivity extremely effectively by building a native version of Claude Desktop for Win/Mac/Linux. And they might do some good for design/engineering trends around UX at the same time as well.
Some random thoughts, since I've had a similar train of thought for a while now.
On one hand I also lament the amount of hardware-potential wastage that occurs with deep stacks of abstractions. On the other hand, I've evolved my perspective into feeling that the medium doesn't really matter as much as the result... and most software is about achieving a result. I still take personal joy in writing what I think is well-crafted code, and I also accept that that may become more niche as time goes on.
To me this shift from software-as-craft to software-as-bulk-product has some similarities to the "pets vs cattle" mindset change when thinking about server / process orchestration and provisioning.
Then there's also the dismay at JS becoming even more entrenched as the lingua franca. There's every possibility that in a software-as-bulk-product world, LLM-driven development could land on a safer language due to efficiency gains from e.g. static type checking. Economically, I wonder if the adoption of a different lingua franca could manifest by way of increasing LLM development speed / throughput.
> LLM-driven development could land on a safer language
Why does an LLM need to produce human readable code at all? Especially in a language optimized around preventing humans from making human mistakes. For now, sure, we're in the transitional period, but in the long run? Why?
It has been lost in the AI money-grabbing frenzy, but a few years ago we were talking a lot about AIs being "legible", meaning that they could explain their actions in human-comprehensible terms. "Running code we can examine" is the highest grade of legibility any AI system has produced to date. We should not give that away.
We will, of course. The Number Must Go Up. We aren't very good at this sort of thinking.
Because the traits that make code easy for LLMs to work on are the same that make it ideal for humans: predictable patterns, clearly named functions and variables, one canonical way to accomplish a task, logical separation of concerns, clear separation of layers of abstraction, etc. Ultimately human readability costs very little.
In a sense they do use their own language; they program in tokenized source, not ASCII source. And maybe that's just a form of syntactic sugar, like replacing >= with ≥ but x100. Or... maybe it's more than that? The tokenization and the models coevolve, from my understanding.
If we do enough passes of synthetic or goal-based training of source code generation, where the models are trained to successfully implement things instead of imitating success, then we may see new programming paradigms emerge that were not present in any training data. The "new language" would probably not be a programming language (because we train on generating source FOR a language, not giving it the freedom to generate languages), but could be new patterns within languages.
> For now, sure, we're in the transitional period, but in the long run? Why?
Assuming that after the transitional period it will still be humans working with AI tools to build things, with humans actually adding value to the process: will the human+AI pairing where the AI can explain what it built in detail, and the human leverages that to build something better, be more productive than the pairing where the human does not leverage those details?
That 'explanation' will be, or can act as, the human-readable code or its equivalent. It does not need to be any coding language we know today, however. The languages we have today are already abstractions and generalizations over architectures, OSes, etc., and that 'explanation' will be different but in the same vein.
Well, IMO there's not much reason for an LLM to be trained to produce machine language, nor a functional binary blob appearing fully-formed from its head.
If you take your question and look into the future, you might consider the existence of an LLM specifically trained to take high-level language inputs and produce machine code. Well, we already have that technology: we call it a compiler. Compilers exist, are (frequently) deterministic, and are generally exceedingly good at their job. Leaving this behind in favor of a complete English -> binary blob black box doesn't make much sense to me, logically or economically.
I also think there is utility in humans being able to read the generated output. At the end of the day, we're the conscious ones here, we're the ones operating in meatspace, and we're driving the goals, outputs, etc. Reading and understanding the building blocks of what's driving our lives feels like a good thing to me. (I don't have many well-articulated thoughts about the concept of singularity, so I leave that to others to contemplate.)
This might be a "the grass is greener on the other side" situation because I do a lot more web than native dev, but in my experience native, while just as quirky as web, will usually give you low-level APIs to work around design flaws.
On web it too often feels like you can either accept a slightly janky result or throw everything away and use canvas or webgl.
Here are some recent examples I stumbled across:
- try putting a semi-transparent element over part of an image with rounded corners and you will observe unfixable anti-aliasing issues in those corners
- try animating an input in sync with the on-screen keyboard
- try doing a JS-driven animation in a real app (keeping it off the main thread feels hopeless, and Houdini animation worklets never materialized)
I don't think it's that native has nothing to offer. I think that developing (in the case of desktop) for 3 different platforms, each with its own complications around what counts as native UI, is a nightmare. macOS has SwiftUI (incomplete), UIKit, and AppKit; Linux in practice GTK/Qt; Windows WinUI 3 (fundamentally broken) with WPF and WinForms still hanging around.
> I think that developing (in the case of desktop) for 3 different platforms, each with its own complications around what counts as native UI, is a nightmare. macOS has SwiftUI (incomplete), UIKit, and AppKit; Linux in practice GTK/Qt; Windows WinUI 3 (fundamentally broken) with WPF and WinForms still hanging around.
Wouldn’t it be a good use of AI to port the same app to several native platforms?
Yes it would, but depending on the app it could put you in a ton of hurt.
- AI has gotten a lot better on less popular tech, but there is still a big capability gap between native frameworks and the blessed React + Tailwind stack.
- You will get something that is likely in the right shape but littered with a million subtle bugs, and fixing them without intimate knowledge of the platform is really hard.
Somehow, a CAD program, a 3D editor, a video editor, broadcasting software, a circuit simulation package, etc are all native applications with thousands of features each - yet native development has nothing to offer?
Besides going full native, a Tauri [0] app might have been another good alternative given they already use Rust. There are pros and cons to that choice, of course, and perhaps Tauri was considered and not chosen. Tauri plus Extism [1] would have been interesting, enabling polyglot plugin development via wasm. For Extism see also the list of known implementations [2].
I have been using Tauri for a macOS app I'm making[1] and it has been great. The app is only 11MB and I've had most of the APIs I'd need.
However, there are still some rough edges that have been annoying to work with. I think for my next project I will actually go back to electron. There are two issues that caused me pain:
1. I can't use Playwright to run e2e tests on the Tauri app itself. That's because the webview doesn't expose the Chrome DevTools Protocol, and tauri-driver [2] does not work on macOS.
2. Security Scoped Resources aren't fully implemented which means if a user gets the app through the app store the app won't be able to remember file permissions between runs [3]. It's not too much of an issue since I probably won't release it on the app store, but still annoying.
But I hope Tauri continues to grow and we start seeing apps use it more.
My team is building a cross platform app with Tauri that is mobile, web, and desktop in one codebase and we've had almost nothing bad to say. It's been great. Also the executable size and security are amazing. Rust is nice. Haven't done as much with it yet but it will come in useful soon as we plan to implement on-device AI models that are faster in Rust than WebGPU.
I find it a bit odd how much people talk up the Rust aspect of Tauri. For most cases you'll be writing a Typescript frontend and relying on boilerplate Rust + plugins for the backend. And I'd think most of the target audience would see that as a good thing.
I'm working on a project using Tauri with htmx. A bit uncommon, I know. The backend uses axum and htmx; no JS/TS UI. It's fast, reliable, and it works well. Plus it's easy to share/reuse the lib with the server/web.
I built a vibe-coded personal LLM client using Tauri, and if I'm being honest the result was much worse than either Electron or just ditching it and going full ratatui. LLMs do well when you can supply them a verification loop, and Tauri just doesn't have the primitives to expose one. For my personal tools I'm very happy with ratatui or non-TUI CLIs in Rust, but for GUIs I wouldn't use it. Just not good dev ex.
I am considering a Tauri app, but am still wondering about architectural design choices, which the docs are sparse about. For instance, the web side may constitute a more full-blown webapp, say NextJS, and include the database persistence, say SQLite-based, on the web side too, closest to the webapp. That goes against the sandboxing (and likely best practice), where all platform-related side effects are handled platform-side, implemented in Rust code. I wonder if it is a valid choice. There is a trade-off between ease of use and straightforwardness vs. stricter sandboxing.
At least with Tauri it's easy to both make the choice and change it later if you want to. I think the docs are sparse because it's your decision to make. I've done it both ways and there are pros and cons. If you use the sqlite plugin and write the actual statements on the JS side then you don't need to worry about the JS<->Rust interface and sharing types. Easier to just get going. If you write your own interface then you probably want to generate TS types from Rust. I think a big advantage to the Rust interface way is that it makes it easier to have the web side be dual purpose with the same code running on the web and in Tauri - the only difference being whether it invokes a tauri call or an API call.
I'll note that I have gone a slightly different path for the main app I wrote: I've written adapters on the js side that generate SQL or API calls depending on where the code is running and I wrote my own select/insert/update/delete tauri commands. The reason I ended up with what seems like a hybrid of the approaches I suggested above is that the js side knows more about what it wants and therefore generates SQL/api calls with the appropriate joins and preloads. On the tauri side I wanted to intercept at the data layer for a custom sync engine, which the frontend doesn't need to know about. However, I've ended up at that solution maybe because I added the tauri side after writing for the web.
It may be interesting, with event sourcing, to have the message bus + event store on the Rust side and the SQL projections exposed in a SQLite db on the web side.
+1 for Tauri, I've been using it for my recent vibe-coded experimental apps. Making rust the "center of gravity" for the app lets me use the best of all worlds:
- declarative-ish UI in typescript with react
- rust backend for performance-sensitive operations
- I can run a python sidecar, bundled with the app, that lets me use python libraries if I need it
If I can and it makes sense to, I'll pull functionality into rust progressively, but this give me a ton of flexibility and lets me use the best parts of each language/platform.
It's fast too, and doesn't use a ton of memory like Electron apps do.
Also, Rust's strong and strict type system keeps Claude honest. It seems as if the big LLM models have trained on a lot of poorly written TypeScript because they tend to use type assertions such as `as any` and eslint disable comments.
I had to add strict ESLint and TypeScript rules to keep guardrails on the coding agents.
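For what it's worth, here is a minimal sketch of what such guardrails can look like as an ESLint flat config; the exact rule selection is my assumption, not a description of the commenter's actual setup:

```typescript
// eslint.config: a sketch of agent guardrails using typescript-eslint.
// Rule choice is illustrative; tune to taste.
import tseslint from "typescript-eslint";

export default tseslint.config(
  ...tseslint.configs.strictTypeChecked,
  {
    languageOptions: {
      parserOptions: { projectService: true },
    },
    rules: {
      // Bans `any` (and therefore most `as any` laundering).
      "@typescript-eslint/no-explicit-any": "error",
      // Forbids `as` type assertions outright.
      "@typescript-eslint/consistent-type-assertions": [
        "error",
        { assertionStyle: "never" },
      ],
    },
  },
);
```

Pairing this with `--max-warnings 0` in CI closes the loop: the agent has to actually fix the types rather than suppress the errors.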
Looks cool, but the phrase 'build applications with the flexibility and power of go' made me chuckle. Least damn flexible language in this whole space.
Electron is the business-friendly choice, because users do not refuse to use it. It is the users' fault: they support and accept it by using it. Nagging doesn't change anything. As with every bad business, it is the users who buy/use the product and therefore keep it in existence.
"The real reason is: native has nothing to offer."
I get it, but this is a bit dramatic.
One of the biggest challenges I've found with using non-native tools (and specifically the various frameworks that let you write JavaScript that compiles to native code) is that there is much less of a guarantee that the 3rd-party solution will continue support for new OS versions. There's much less of that risk with 1st-party solutions.
Additionally, those 3rd parties are always chasing the 1st-party vendor for features. Being far behind the regular cadence of releases can be quite inconvenient, despite any advantages initially gained.
On iOS there isn't always a choice to not build something native. For example, I can't install Navidrome as a PWA because Apple doesn't properly support audio playback for PWAs. I ended up writing a client that suited my listening habits.
To read ePubs, however, I was able to write a PWA leveraging epub.js because no native APIs were required.
> On iOS there isn't always a choice to not build something native.
Tangentially, even native can be badly designed and developed, performance wise. Even Apple hasn’t been able to do a good job with the Reminders app (one of the several apps ported to Mac with the same level of negligence that Electron brings in). I use a lot of Reminders and lists in Reminders. It’s janky and poorly coded.
Oh absolutely. I hadn’t touched native development in perhaps a decade (and that was Xamarin before Microsoft acquired them). My initial iterations were rough, but I’m happy with where the app is. Choosing an audio app to try native again likely wasn’t the best choice on my part either.
That's because "native" is still a fucking pain in the ass with few benefits.
Electron can be pretty performant if written well. The difference between Electron and native is that with Electron you can write sloppily (more common) or performantly, whereas it's in the nature of native to force you down the performant path (which takes a lot more time).
People used to joke about new Javascript frameworks, now with AI things are changing even faster and you really think it's worth investing time into a native app for something whose features are rapidly expanding and changing? That's bad software management.
If they're doing stupid things in terms of performance then by all means they should fix that but Electron isn't inherently bad.
I felt that this article didn't provide strong justifications for some of its assertions.
> Native APIs are terrible to use, and OS vendors use everything in their power to make you not want to develop native apps for their platform.
Disagree. I'm most familiar with Windows and Android - but native apps on those platforms, and also on Mac, look pretty good when using the default tools and libraries. Yes, it's possible to use (say) Material Design and other UX-overkill approaches on native, but that's a choice, just like it is for web apps.
And OS vendors are very much incentivised to make native development as easy and painless as possible - because lock-in.
> That explains the rise of Electron before LLM times,
Disagree. The "rise of Electron" is due to the economics of skill-set convergence on JS, the ubiquity of the JS/HTML/CSS/Node stack platform, and many junior developers knowing little or nothing else.
As for the rest: minor variations in traffic light positioning and corner radii are topical but hardly indicators of decaying platforms.
The rise of Electron was purely because you can share the codebase for real with the web app (for lots of apps it is their main focus) and get cross-platform support for free.
Native apps are not bad to develop when using Swift or C#, they are nice to use and their UI frameworks are fine, it's just that it requires a separate team. With Electron you need much less, simple as that.
> As for the rest: minor variations in traffic light positioning and corner radii are topical but hardly indicators of decaying platforms.
I think it shows how important the platform itself is to the company. The System Settings app on macOS is literally slow to switch sections (the detail page updates something like ~500ms after clicking).
I personally love to develop desktop apps but business-wise they rarely make sense these days.
> Disagree. The "rise of Electron" is due to the ubiquity of the JS/HTML/CSS/Node stack, and many junior developers knowing nothing else.
with all due respect - hard disagree. in what place on Earth do Junior Devs make these types of decisions?? Or decision makers going “we got these Juniors that know JS so it is what it is…”
I don't believe they were implying they would make the decision. It's expensive to have your team learn new skills from scratch, and management won't want to pay for that if they don't have to.
I have been coding for 30 years now and I have never encountered a technical decision like choosing technology (e.g. Electron) for anything important to the company being made with "oh, we must use X because so and so knows X"
Maybe if there was a toss-up between X and Y or something like that but to flat-out pick Electron because you have people that knows JS is madness
I'm thirty+ years in too, and it happens all the time - particularly in smaller operations. Resourcing constraints, disinclination to provide training, tight deadlines, etc.
While that's not what the author meant - it's all the places on Earth where those people grow up and become powerful enough to make those decisions (but also before that, in their own little apps).
I've been evaluating truly native x-platform GUI frameworks for a while and GPUI[0] from the zed team along with longbridge/gpui-component[1] looks the most promising.
Though I haven't tested it too much other than building small personal tools like a work habit tracker, it's very well documented and looks great. Gave me a good excuse to try the language again as well.
When an article states “OS vendors use everything in their power to make you not want to develop native apps for their platform”, it’s really hard to take the rest seriously.
Sure, native applications can be hard and slow to develop for a number of reasons, but OS vendors are seriously incentivized to have developers build on their platform.
For desktop specifically, native is just not worth it anymore. For example setting up permissions for every OS is a pain in native, but you get that for free on the web. You can basically do everything native could offer now, even notifications. The only issue is the fat address bar at the top that kind of kills the immersion.
Every thread on here that goes over some threshold of Electron complaints should require commentors to include their machine specs with their post, and it should play an extremely loud incorrect buzzer if it's 8 GB memory or less.
You don't need 8GB of RAM or less to have memory issues. Cursor + Claude Code + Slack + Discord + Spotify + a few Docker containers + YouTube and a few browser tabs is enough to overwhelm a MacBook with 24GB of RAM.
Right now on my machine, 5 whole docker containers, including two DBs and 3 dev servers, are taking up less RAM than Cursor, a glorified text editor.
And have you looked at RAM prices lately? It's possible that 8GB is all some people can afford.
May I ask how old you are? If you are not just naive, this should never be the attitude of someone building software. It's almost a cursed take.
Throughout history we've only gotten more efficient with our technology and its resources. Software might be the only field that went in the opposite direction. And now we are insulting users on low-end hardware?
The same thing I did fine on a 2GB RAM Windows 7 machine is a latency nightmare on a 16GB Windows 11 machine. The same exact task with the same results.
I don't want to completely hate Electron. VS Code proved it can be done in a good way; Teams showed us the opposite. Like the author said, you can build slop in any stack: JS or native, it doesn't matter. What matters is care.
This entire industry is cursed with people who don't give a shit, in it for a quick buck, are entitled and lazy. Maybe AI should destroy everything anyway. We had better software when we gatekept who can build it.
You may ask, but given latest advancements in identifying people from trivial information, I'll be a bit cagey about it. I'll say I'm not 20 anymore. :)
I'm certainly not trying to be insulting, but these threads do get more than a bit tiresome. There's very little curiosity in them towards understanding why Electron is so popular; maybe a tendency lately of HN to assume that popular things are bad, or that large companies do things exclusively for the wrong reasons.
Efficiency is coming up a lot in this thread, and almost universally without definition, because the people making claims about it are implied to be measuring the efficiency of the same thing: how fast your software can run on the least possible hardware. But efficiency can also describe how you use time, which as they say is the one thing you can't ever have more of. Or money, which is also rarely infinite.
In this thread I see takes like:
- "They're a huge company, they can afford to do it native!" (or they can write it once across 3 OS and the browser, and spend the time on other things)
- "Electron has bad performance!" (non-native performance isn't the same thing as bad performance, using more RAM than you think it should consume isn't automatically bad performance)
- "The UI is inconsistent with native apps!" (take it up with product and design, remember Skype? Or AIM)
Electron isn't going to take over the bare-metal powerhouse app space anytime soon, but that's not why anyone builds anything with it. It saves dev time, it makes the economics of bothering to support Linux desktop users make sense, most actual customers have more RAM than they need anyway, and you can still call out to native code for the perf sensitive parts.
Like you and the article said it's about caring. There are definitely bad Electron apps. There are bad programmers at every level between the user and the metal. But the scapegoating on here is so predictable I could have predicted half the responses in this thread with my eyes closed given only the title of the article and some of them are borderline against the guidelines in how little they further discussion. I at least hoped my comment was funny, even if it didn't add much either.
The age question was a bit of an overreach on my part, sorry about that. I felt like you were one of those new developers who have only ever had high-end MacBooks.
You are right about the economics side and I don't hate electron in particular. Just the general state of the industry. Even native is incredibly slow nowadays while spending 1000x more cycles. It's crazy.
No sweat, I wouldn't poke my head up in threads like these without expecting at least a little friction in return. The first machine I ever used and wrote code on was a hand-me-down former "work laptop" that had been kitted out to run Lotus Notes and basically nothing else. Trying to get anything to run on that forced me to learn how computers work.
I definitely agree regarding the state of the industry as far as performance and care for the user goes across the board, even with the hotrod hardware we have access to now. Maybe it was inevitable with scale and time and I do lament that it's harder to economically justify cranking the most possible out of the hardware. With my original comment I mostly meant to convey a joking exasperation that these threads are rarely even about real profiling or real-world tradeoffs that make for interesting discussion anymore but just rote "Electron bad".
I keep seeing versions of this take too and I don't get it. If it's just snark about how AI is being marketed as an engineer in a box, sure, but unless everyone else is using different agents than I am, they still aren't literally magic. Nobody has a button they can press to just make the whole app cross-platform native, and have it behave the same across platforms, without additional substantial handholding and time investment. And again, purely economically, why would they? When inference still costs money and time that they can invest in having the AI do other things that actually matter to their customers and business?
And unless there've been recent advances in longevity I'm not aware of, individual human devs still have a fixed amount of time on this earth, and there's no end in sight to the demand for software even if we 1000x the speed we can build it at today. They can spend it doing other things instead of futzing around prematurely optimizing chat apps.
That's what I mean, this has nothing to do with Electron or why developers or companies use it. It's just kicking the tires on the marketing. Truth in advertising is always scarce, Red Bull doesn't literally give you wings either.
You and I both know that AI today can't really build a browser or compiler from scratch on par with the cumulative human work put into Chrome or GCC. You could just as easily side-eye these companies like "well, if your AI is so good, why doesn't it build its own hardware and OS and run on that?". Even if they could, why would they do that except as a proof of concept? It'd still be more work and more time for no benefit at all to their customers or their business.
In very recent memory, apps in general being cross-platform was "rarely done". The alternative world without something like Electron is not perfect native applications on every platform, it's choosing to support one or both of the major operating systems that people actually use.
If you've never had a native app skip a key or lag because you typed too fast, I don't know what to tell you. I've used some real garbage production software written entirely in God's own C that had those problems. I have twelve instances of VS Code open right now, some with huge projects, and cumulatively they're taking up less than 2GB of memory. It's just throwing stones in the wrong direction to put this blame on Electron at this point.
Once upon a time, programmers cared about how performant their software was because they recognized that software was created for the end user.
Now even the good programmers don't give a fuck. Software development is now about burnishing one's resume and hoping for a massive payday instead of making stuff that's useful.
With regards to "we've lost native" isn't local-first the opportunity to bring it back in a paradigm shift that deals a blow to both the browser and cloud vendor hegemonies?
I’ve only built some basic CRUD-type apps in SwiftUI, but I thought it was pretty nice. The APIs seem well thought-out, I needed less code than I would have to do the same thing in React, and the default styling looked quite professional.
Of course, the React ecosystem and community is enormous. I bet if I was building something complex, I would have gotten frustrated building it all from scratch in SwiftUI when I could just reach for a React component library.
But on the other hand the claude app is garbage… https://github.com/anthropics/claude-code/issues/22543
obviously native apps can be garbage too, but I must say electron apps have a surprisingly high incidence of terrible performance issues, unsure if it’s a correlation or causation issue
I imagine the first step would be for them to make a cross platform UI framework that's better than any existing options, and then port claude to it.
Making five different apps just to claim "native" doesn't seem like a great choice, and obviously for now, delivering new claude features takes priority over a native graphics framework, so electron makes sense. But that doesn't mean it'll be on electron forever.
When I complain about a lack of “native” software I pretty much always mean the platform-provided defaults, not some cross-platform UI toolkit that happens to be “native” code. Most apps that I see using Qt on Mac or whatever would probably be better as Tauri apps.
Respectfully: skill issue. My employer ships software native for Windows, Mac, iOS and Android. Different codebases for all (though they share a lot of common stuff), all maintained by a shockingly small team.
It’s absolutely achievable if you give a shit about your products and I’m long over hearing the bevy of usual fucking excuses from software houses often magnitudes larger than us who struggle to even keep their electron shit working correctly.
Native is shorthand for "integrated into the platform". Lowest-common-denominator stuff that Electron gives feels correct nowhere (looking at you, Slack). The very best cross-platform applications implement their UI using the native platform idioms on the native platform technologies, and share the core of their logic. The current best example I have is Ghostty which feels perfectly at home on either macOS or Linux.
Even if web rendering is the best technology possible, there's still plenty you could hypothetically optimize, like replacing the Javascript with native code, or cutting out unused features to get a smaller download.
Ultimately, Claude having limitations is an issue. They can't just point it at the code base and ask it to make it faster.
You've basically described Flutter and Jetpack Compose (for desktop). The problem really does come down to effort versus payoff. Even if we stayed with JS and the rendering engine, figured out a way to compile JS into native code, and completely stripped all of the unused functionality from both, all of that would still need to be built - and it's not like Electron apps literally crash your machine. You have metrics: hundreds of millions of devices run Electron apps on a daily basis. Unless you start your own company, I don't think anyone can convince their leadership to make such a decision.
This wouldn't be an issue if they allowed 3rd party apps or priced their API competitively with their subscriptions. Free software normally fixes these types of problems but is prevented in this case.
I have been using Claude Code ironically enough to build native apps via Qt and Rust, the output is impressive. Might give writing an IRC client a shot and put it in GitHub.
great post - let me add that native was forced into some of this by the web:
1. locked up files ==> that's for security, which wasn't an issue in the 1990s.
2. inconsistent look ==> people are embedding browsers inside apps for all sorts of reasons, ruining the "native" UI/UX even if the OS "look" were stable.
It took a while, but once again open source and the web kinda won, though if you like consistency, then I agree it's a pyrrhic victory...
Perhaps a hot take, but I'm glad for electron apps because that also means they will be well supported on linux, which is almost never the target of native development.
Conversely, an app using native toolkits (at least the Windows ones) will have better chances of running fine under Wine. I've recently had the (dis)pleasure of trying to run some .NET monstrosity with Wine, and oh my god did it not work, for obscure reasons.
But overall yeah, from a compatibility perspective, nothing beats Electron. I'm not sure we'd ever get an official Discord client on Linux otherwise.
> Some time ago, maybe in the late 90s and 2000s, native was ahead. It used to look good, it was consistent, and it all actually worked
That's just a mythical past. In reality you had the full variety of garbage with basic things broken (hello, blurry native text when a poor user dared adjust the screen resolution to match their vision - and text is the most basic of the basics of UI). Only today many of those issues are wrapped in a browser, many times more inefficient (though, to be fair, concentrating dev efforts on one platform makes it harder to mess up some of the basics).
> the more apps used native look and feel, the better user experience was across apps
This one is especially atrocious: the default look of Windows was ugly and the feel unergonomic, and user experience doesn't magically become better if you stick to the ugliness. And for important productivity apps it also doesn't become better if you stick to the unergonomic - because, you know, there are two sides to a coin, and "familiarity" isn't the only nor even the most important factor.
> The real reason is: native has nothing to offer.
It offers the same things it always has: the potential to be performant and integrated (with a better baseline), just as the OS god intended (though the OS devil intervened).
And the counter here is too shallow
> There’s no technical reason why
There is: bad abstractions that make it harder to be performant are a technical reason for poor performance.
> Web apps can be faster, too, but in practice, nobody cares.
This is just very shallow. "Can" is useless on its own; you need to engage with all the other major factors that affect reality, not ignore everything at the level of a theoretical "can".
> What makes you think it’ll be different once the company decides to move to native?
Well, reality? There are literally the same companies with the same apps that were different before switching!
Foundational-model apps' UIs are Electron apps because all of them are web-first and app-second, and the easiest way to do that is to ship an Electron app.
What's the advantage of their Electron apps vs using the web? I get that in theory you can use native-only pieces that a browser doesn't support, but in practice a lot of Electron apps are literally the website.
I love Claude but both the desktop and the web apps are incredibly janky when compared with their OpenAI counterparts (I won’t even comment on Gemini’s because it’s also incredibly broken and unreliable).
Start times are atrocious, long conversations literally break rendering (and I’m using the desktop app on a MacBook Pro with an M3 Pro CPU and 36 GB of RAM so an electron app shouldn’t feel so janky so frequently, right?).
IMHO they just don’t care right now (and I get it from their perspective) but I’m pretty sure there’s a lot of low hanging fruit they could fix to make the experience 5x or even 10x smoother. Claude Code (TUI) is also another memory hog and I’ve even had Ghostty literally crash with enough open sessions.
Fortunately, Anthropic's models are also incredibly capable, so at least I've been able to speedrun my own desktop client using Tauri and fix many of the issues I encounter every day. I might release it at some point.
Just saying that Electron is easier to maintain and pretty much self-sufficient. You can't say that native brings nothing to the table, though. I'd prefer a native app over Electron any day.
It's weird for the author to mention Mac window buttons and corner radius as reasons to use Electron, because while the main content of Electron app windows is HTML, the Electron windows themselves and the window chrome are native, with the same buttons and corner radius as other apps on the system.
Electron is a native wrapper for web content. The wrapper is still native.
> Native APIs are terrible to use, and OS vendors use everything in their power to make you not want to develop native apps for their platform.
I'm honestly not quite sure what the author means here.
Web APIs are equally “terrible” in my opinion. In any case, you have to release an Electron app on Mac the same way you release any native app on Mac. The benefit of using web APIs is not that they are non-terrible but that you can share the same code as your website. And of course you can more easily find web developers than native developers. But that has nothing to do with whether or not the API is terrible. It’s just supply and demand.
I’ll take AppKit and autolayout any day over CSS, ugh. CSS is the worst.
> with the same buttons and corner radius as other apps on the system
I just checked: No, the corner radius is different. I'm personally not very bothered by that, but it's just empirically true.
> Electron is a native wrapper for web content. The wrapper is still native.
In my view, the problem isn't that it's a wrapper, but rather that it's a bad wrapper of a bad runtime (i.e. the incredibly bloated JS/web stack).
UIKit etc. never made sense to me even after years; CSS also didn't make sense; but right out of the box I understood React. And with hooks, it's way less boilerplate than the UIKit way.
Separate from that, Apple doesn't seem to mind breaking native macOS apps, to the point where most devs treat native code like a liability on Mac but ok on Windows.
it’s about cost. develop once, deploy everywhere. web solved that. web devs are a dime-a-dozen (i’m not saying web dev is bad… just that it’s a saturated market) so the cost is already scraping the bottom of the barrel.
now, compare this to deploying native apps, we need:
- a macOS dev/team
- a windows dev/team
- a linux dev/team
- a web dev/team
- testers for each of those platforms
- coordination between the teams so the UI looks consistent
that’s a 4-5x larger staff
AI is a force multiplier, sure.. but it’s not a 10x force multiplier. it’s 2x at best, but probably 1.5x in practice (and averaged across the board).
Qt is a cross-platform native toolkit. It is native on Linux, looks more native than whatever the current UI clusterfuck is on Windows, and Mac users will complain a bit (at which point you can point out that at least it's not Electron). You can also compile it for Wasm but yeah, probably more of a last resort.
I really doubt that at this point. Developers have learned that everything Microsoft says to do for Windows, since 2012, will be garbage within a few years. Guaranteed.
Learned Silverlight for Windows Phone development? Too bad, it's UWP now. And the XAML is incompatible.
Learned WinRT for Windows 8/8.1 app development? Too bad, it's UWP now. And the XAML is incompatible.
Packaged your App for APPX? Too bad, it's MSIX now.
You learned how to develop UWP apps? Too bad, the User Interface layer has been ripped out of UWP, it's now called WinUI 3, and it doesn't even run on UWP. Better port your UWP app back to Win32 now, I guess. Why did you even learn UWP again?
You went and learned WinUI 3 like we recommended? Well, unlike WinUI 2, it doesn't have a visual designer, and it doesn't have input validation, or a bunch of other WinUI 2 features. So, depending on what your app needs, you might have a mix of UWP and Win32, because WinUI 2 is UWP-exclusive and WinUI 3 is Win32-exclusive and neither has all the features of the other. Progress!
You built your Windows 8 app with WinJS? Well, sucks to be you, rewrite it in entirety, WinJS was scrapped.
You ported your app from iOS with Project Islandwood? Well, again, that sucks. It was brilliant, it made pulling apps over from iOS much easier, but it's dead. Rewrite!
You decided to hang it all, develop for good old WPF, but wanted to use the Ink Controls from UWP? Great, we developed a scheme for that called XAML Islands which made so you could have some of the best UWP controls in your old app. Then we released WinUI 3, completely broke it, and made it so complicated nobody can figure it out. So broken; even the Windows Team doesn't use it and is writing the modern Windows components for File Explorer with the old version.
But of course, that would require WinUI 2, for UWP, inside Win32 which is the main feature of the broken WinUI 3; which means that the Windows Team has a bastardized version of XAML Islands for their own use that nobody else has (literally), to modernize the taskbar and File Explorer and built-in apps like Paint, that nobody who wants to emulate them can borrow. Their apps don't look modern and their users complain? Suckers, go learn WinUI 3, even though our own teams couldn't figure it out.
You wanted your app on the Microsoft Store? Well, good news, package it together with this obtuse script that requires 30 command-line arguments, perfect file path formats, and a Windows 10 Pro License! Oh, you didn't do that? Do it 5 years later with MSIX and a GUI this time! Oh, you didn't do that? Forget the packaging, just submit a URL to your file download location. Anyone who bothered with the packaging wasted hours for no real purpose.
Did I mention Xamarin? A XAML dialect of its own, that supports all platforms. But it runs on Mono instead of the authentic .NET, so you'd better... work around the quirks. Also it's called MAUI now, and runs on .NET now. But that might break a few things so hang around for over a year's worth of delays. We'll get it running for sure!
Oh, and don't forget about ARM! The first attempt to get everyone to support ARM was in 2012 with a Windows version called... No, no, no. Go past this. Pass this part. In fact, never play this again. (If you want to imagine pain, imagine running Windows and Microsoft Office on a ARM CPU that came three generations before the Tegra X1 in the Nintendo Switch. Surface RT ended with a $900M write-off.)
And so on...
Or, you could just ignore everything, create a Windows Forms (22 years strong) or WPF app (17 years strong), and continue business like usual. Add in DevExpress or Telerik controls and you are developing at the speed of light. And if you need a fancier UI, use Avalonia, Electron, React, or Flutter.
Can we please not equate "performant apps with good user experience" with "native apps". One is (user-centric) goals, the other one is just a dogma, propaganda sold by a certain type of consultant.
I really hate Electron, but something is so rotten under macOS that even some of Apple's own native apps are appalling. The Settings and Passwords apps are so bad as to be almost unusable. I'd love to know how and why they're that bad - are they Catalyst, or just badly made?
I don't know for a fact, but I'd bet a few digits of cold hard cash it's a SwiftUI rewrite that is to blame. (Any1 in the know want to chime in?)
And yeah, it's terrible. Apple doesn't make good apps anymore.
(This is part of why I think electron does so well -- it's not as good as a really good native app [e.g. Sublime Text], but it's way better than the sort of default whatever you'll get doing native. You get a lot of niceness that's built into the web stack.)
While there are missing features (e.g. ability to merge records), I have to say that Passwords.app is worlds ahead of 1Password since their electron rewrite. System Settings is not the best (mostly because the search is broken), but Passwords is sufficiently good that I haven't bothered looking what it's written using, whereas I can immediately tell with Electron.
Some of their apps do run web views disguised as native applications—Apple Music, for instance.
Passwords works fine for me, Settings does display notorious lag loading icons, but Apple Music is by far the most disgustingly bad “native” app. Everything is slow on that one, everything takes ages to load, everything makes scrolling choke and stutter, everything even looks like a website crammed inside a desktop window, and to top it all off, the feature disparity between mobile and desktop is so large that you can still see remnants of iTunes floating around on desktop while still not being able to sync the entire condition-set of smart playlists between devices. It’s appalling.
But hey, Apple is a small company after all, they must lack the resources to make their once flagship service run decently on these powerful new chips.
It's the incentives and everything is a trade off. Time to market, performance, features: none of these choices are made in a vacuum. Oh, and people like to go home and see their families once in a while.
As a developer of over 35 years, I feel like I hear the same arguments over and over again. "Programmers used to care about performance!" No they didn't, they just had no choice because computers sucked and you had to work on performance or your application would barely run. "Programmers used to care about the quality of their code!" Really. You apparently never worked on legacy systems with years of hacks and spaghetti code that took an afternoon to trace through just to figure out what it was doing.
People haven't changed. Kids aren't lazier these days. The incentives are always just to ship as fast as possible. Performance will be dealt with when and if it is so bad that the customer complains and not a moment sooner.
When I was much younger I fancied myself a "craftsman" of software. But any "craft" I was able to bestow on my software was in spite of the surrounding incentives not because of them. Software is closer to assembly line work than craftsmanship and LLMs are just driving that point home faster and harder than ever.
I still love software development after all these years, but it's entirely because I love solving problems and computers still fascinate me the same as they did when I got my first TRS-80 Color Computer at age seven. Nobody who isn't a programmer cares, as long as the software does what they need it to and does it fast enough that they don't start to wonder why they have to use this piece of crap software in the first place.
With today's processors and speeds, there really is no difference in performance between these Electron apps and native ones, and anyone disagreeing just dislikes the JS ecosystem.
Based on "anyone disagreeing just dislikes the JS ecosystem", I feel like you might not want to grace me with a response, but I disagree /somewhat/.
Electron and web technology generally is certainly more performant than it once was, and I think people do malign Electron specifically a bit too much. VS Code continues to be the anti-example. It's always rather surprising it's just a web view, even on lower end hardware. (A several year old raspberry pi for example)
(Please note, I said "surprising it's just a web view", not "it's more performant than it could be if built differently".)
I think the main difference people tend to experience is a lack of care. I would say, for reasons I am NOT sure are causal, electron apps do seem to tend towards worse experiences on average in my experience. I think on the web, a quick flash of unstyled content or that ghost of the element you accidentally dragged instead of clicked are seen as just minor issues, because they're expectations of the web. If things go REALLY wrong, I have a whole rock solid toolbar above the app that lets me refresh if I think I'm in some infinite loop, or the URL bar I can look at if I'm not sure what page I was just redirected to. The back button promises to return me to where I was before. The browser is caging-in applications for me, so it's fine if they're a bit rowdy.
But using an application that is pretending to NOT be a web browser, seeing any web-quirk feels like I'm staring at rusted rebar in a cracked concrete bridge. The bridge still works, but now I'm aware of the internals of it and maybe that makes me feel a little more uneasy about standing on it. There is no back button if something goes wrong, and if there is, the app itself is rendering it. It's of course possible to hide that reality from me, but you need to care about sealing up all the cracks that let me see the rowdy internals.
To be fair, maybe that's just me that feels that way. And I do rather like the JS ecosystem.
I disagree because yes I dislike the whole JS ecosystem and the language itself. But also because Electron apps in general are resource monsters and while some are better than the others, Claude Desktop is definitely not one of them. Hell even their website will crash on Firefox very often.
The thing people miss isn't that there aren't downsides (power, memory, disk size, dependency ecosystem size, etc.); it's that they're still completely outweighed by the upsides of write-once-ship-all for authors.
This is a good thing. Native was always a place where gatekeeping and proprietary crap sprout and thrive.
It can't die soon enough. Doesn't all have to be Electron, but web tech for the win and force everything to the browser unless you'd like to write it twice.