
I work on an embedded Linux system that has 256 MB of RAM. That can get eaten up really fast if every process has its own copy of everything.


~15 years ago my "daily driver" had 256MB of RAM and it was perfectly usable for development (native, none of this new bloated web stuff) as well as lots of multitasking. There was rarely a time when I ran out of RAM or had the CPU at full usage for extended periods.

Now it seems even the most trivial of apps needs more than that just to start running, and on a workstation less than a year old, with a 4-core i7 and 32GB of RAM, I still experience lots of lag and swapping (a fast SSD helps, although not much) doing simple things like reading an email.


I'm on a Mac with 32 GB of RAM.

According to Activity Monitor, right now:

• 4.26 GB are being used by apps

• 19.52 GB are cached files

• 8.22 GB are just sitting idle (!)

Now, I'm not running anything particularly intensive at the moment, and I make a point of avoiding Electron apps. I also rebooted just a few hours ago for an unrelated reason.

But the fact is that I've monitored this before—I very rarely manage to use all my RAM. The OS mostly just uses it to cache files, which I suppose is as good a use as any.


> and I make a point of avoiding Electron apps

I do that personally too, but in a work environment that is unfortunately not always possible, and it's also responsible for much of the RAM usage.


Slack is the biggest culprit IME. If there were a native client, I'd take it like a shot.



> ~15 years ago my "daily driver" had 256MB of RAM and it was perfectly usable for development

What I failed to mention was that the rootfs is also eating into that (it's a ramdisk). In your case I'm guessing your rootfs was on disk.


Oh my god, just try running a modern OS on a spinning rust drive. It's ridiculous how slow it is. It's obvious that modern developers assume everything is running on SSD.


Are you sure? I've been running Linux for a long time with no page file, from 4GB to 32GB (the amount of RAM I have now), and have literally only run out of RAM once (and that was because of a bug in an ML program I was developing). I find it very hard to believe that you experience any swapping at all with 32GB, much less "lots".


You've likely not experienced the amazing monstrosity that is Microsoft Teams:

https://answers.microsoft.com/en-us/msoffice/forum/all/teams...

There's a screenshot in there showing it taking 22GB of RAM. I've personally never seen it go that high, but the 10-12GB that I have seen is absolutely ludicrous for a chat app. Even when it's freshly started it takes over 600MB. Combine that with a few VMs that each need a few GB of RAM, plus another equally bloated Electron app or two, and you can quickly get into the swapping zone.


How is that possible, 22GB? Fucking Electron. You would think, at least, that Microsoft would code a fucking real desktop app... I hate web browsers.


I also experience the same thing with Mattermost (the client also being an Electron app). The memory bloat usually comes from switching back and forth between so many channels, scrolling up to load more chat history, and lots and lots of image attachments (and of course, the emoticons).


> scrolling up to load more chat history, and lots and lots of image attachments (and of course, the emoticons).

I remember comfortably browsing webpages with lots of large images and animated GIFs in the early 2000s, with a fraction of the computing power I have today. Something has become seriously inefficient with browser-based apps.


You said yourself you managed to find a case where you ran out of memory. Why do you find it "very hard to believe", knowing nothing about his use cases, that his job involves exactly the sort of situations that consume vast amounts of RAM? Why do people insist with such conviction that "it doesn't happen to me, therefore it's inconceivable that it happens to someone else, doing something totally different from what I'm doing, into which I have no insight"? Baffling.


> Why do you find it "very hard to believe", knowing nothing about his use cases, that his job involves exactly the sort of situations that consume vast amounts of RAM?

Probably because the GGP said they experience lag while "doing simple things like reading an email." Now, maybe the GGP meant to add "while I'm sequencing genes in the background", but since that was left out I can see how it would be confusing! :)


That's fair. Good point.


Then don't statically link. "Embedded systems" requirements shouldn't dictate "desktop" or "server" requirements.


You should look into FDPIC as a format for your binaries. I think it might lessen your concerns.


So dynamic linking makes sure that each dependency is loaded into memory just once?

Could someone estimate how much software nowadays is bloated by duplicated modules?


Note that when using static linking, you don't get a copy of everything, just everything you actually use.

It doesn't alter the fundamental point: shared libraries save both persistent storage and runtime memory.
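To see that concretely, you can ask the linker for a map of everything it pulled in (a quick sketch assuming GCC and GNU ld on Linux; min.c and min.map are just illustrative names). Even an empty program drags in a surprising amount:

    /* min.c - empty program; the link map shows what static linking pulls in anyway */
    int main(void) { return 0; }

    $ gcc -static -Wl,-Map=min.map -o min min.c
    $ grep -c '\.o)' min.map    # rough count of libc archive members linked in

The map lists startup and runtime-support objects that come along no matter what you call, which is why the size floor for static binaries is so high.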


> Note that when using static linking, you don't get a copy of everything, just everything you actually use.

Which is a significant fraction of everything, even if you only call something simple like printf.

> It doesn't alter the fundamental point: shared libraries save both persistent storage and runtime memory.

I fail to see the argument for this. Dynamic linking deduplicates dependencies and allows code to be mapped into multiple processes "for free".
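One way to see the sharing directly (a sketch assuming Linux and glibc; the exact library path varies by distro) is to print a process's libc mappings. Every dynamically linked process maps the same libc.so file, and the kernel backs the read-only/executable segments with the same physical pages:

    /* show_libc_maps.c - print this process's libc mappings (Linux only) */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) { perror("fopen"); return 1; }
        char line[512];
        while (fgets(line, sizeof line, f))
            if (strstr(line, "libc"))   /* e.g. the r-xp segment of libc.so.6 */
                fputs(line, stdout);
        fclose(f);
        return 0;
    }

Run two instances and compare: both show the same file and offsets for libc's text segment, so the second process costs almost no additional physical memory for that code.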


Have you measured it? How much is dynamic linking saving you? How many processes are you running on embedded systems with 256MB of RAM?


Ok, so I just measured with a C "hello world".

My dynamically-linked executable is 8,296 bytes on disc. My statically-linked executable is 844,704 bytes on disc.

So if I had a "goodbye world" program as well, that's a saving of about 800KB on disc.

Now one can argue the economics of saving a bit under a megabyte per binary at a time when an 8GB microSD card costs under USD 5 in single quantities, but you can't deny that, in relative terms, it's a big saving.

At runtime, the dynamic version uses (according to top) 10540 KB virtual, 540 KB resident, and 436 KB shared. The static version uses 9092 KB virtual, 256 KB resident, and 188 KB shared.

I haven't investigated those numbers.
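Something like this reproduces the comparison (a sketch assuming GCC and glibc on Linux; exact numbers will vary by toolchain and libc version, and the pause() is only there so top can inspect the live process):

    /* hello.c */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        printf("hello, world\n");
        pause();   /* block on a signal so the process stays alive for top */
        return 0;
    }

    $ gcc -o hello-dyn hello.c             # dynamically linked (the default)
    $ gcc -static -o hello-static hello.c  # statically linked
    $ ls -l hello-dyn hello-static         # compare sizes on disc
    $ ./hello-dyn & top -p $!              # VIRT/RES/SHR for each variant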


256MB of RAM is a fairly large amount; it's how much the iPhone 3GS had, for instance. The 3GS relied heavily on dynamic linking to system libraries and ran multiple processes.


That proves the point: with multi-GiB memory nowadays, you can fit, many times over, all the space the iPhone saved by using dynamic linking.


I would rather not be limited on my laptop to iPhone apps from ten years ago.


My second sentence was arguing for dynamic linkage (I called them "shared libraries", but I think that's a fairly common nomenclature).



