For all of its usefulness in the good old days of rusty disks, I wonder whether virtual memory is worth having for dedicated databases, caches, and storage heads. Avoiding TLB flushes entirely sounds like a huge win for massively multithreaded software, and memory management in a large shared flat address space doesn't sound impossibly hard.
This is the kind of debate that has surrounded virtual memory forever[0][1]. If you can keep everything in memory, you're golden. But eventually you won't be able to, and you'll need to rely on secondary storage.
Is there a performance benefit to be had by managing the memory and paging yourself? Yes. But eventually you will also want to run processes next to your database, for logging, auditing, ingesting data, running backups, etc. Virtual memory across the whole system helps with that, especially if other people will be using your database in ways you can't predict. As for the efficiency of MMUs and the OS, it seems "satisfactory" enough for almost all cases[1].
I guess things like mshare could be extended to entire process address spaces, and the kernel could avoid TLB invalidation on context switches between them. Core affinity could be used to keep other programs from scheduling on the cores intended for the processes sharing the whole address space.
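A rough sketch of the affinity half of that idea, on Linux, from user space (the choice of CPU 0 is made up for illustration; a real deployment would carve out cores with cgroups/cpusets or `isolcpus` rather than per-process calls like this):

```python
import os

def pin_to_core_zero():
    """Linux-only sketch: restrict the calling process to CPU 0,
    leaving the remaining cores free for the group of processes
    that share the flat address space."""
    os.sched_setaffinity(0, {0})  # pid 0 means "the calling process"

def current_cores():
    """Report which CPUs this process is currently allowed to run on."""
    return os.sched_getaffinity(0)
```

`sched_setaffinity` only keeps *this* process off the reserved cores; keeping everyone else off them is the part that needs system-wide policy.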
The jump in address sizes gets unwieldy fast. 32-bit addresses were fine, 64-bit addresses are already clunky, and 128-bit would cost exorbitant CPU real estate. There's a reason AMD64 supported only 40 physical address bits when it was introduced, and later expanded only to 48.
The reality is that there will always be a hierarchy of storage, and paging will always be the best mechanism for dealing with it. Primary memory will always be the most expensive tier, no matter what technology it's based on, so there will always be something slower, cheaper, and denser used for secondary storage. Its capacity will exceed primary's, and it will always be most efficient to reference secondary storage in chunks - pages - rather than at individual byte addresses.
I don't really see what those two things have to do with each other. When you don't use mmap, you manage the disk<->RAM storage virtualisation yourself; hardware paging is then pure overhead. The parent doesn't argue against layering storage media, nor against chunking in general - only against MMUs as the mechanism for implementing it.
The "paging" is implemented in software, not in hardware. This is how databases that don't use mmap already work, so MMUs are already pure overhead for them.
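For concreteness, a minimal sketch of that software paging - a user-space buffer pool that reads fixed-size pages with explicit `pread` and evicts in LRU order. The class name, page size, and eviction policy are illustrative, not taken from any particular database:

```python
import os
from collections import OrderedDict

PAGE_SIZE = 4096  # illustrative; real engines pick their own page size

class BufferPool:
    """Tiny user-space page cache: fixed-size pages, LRU eviction.
    Real databases add pinning, dirty-page write-back, and locking."""

    def __init__(self, fd, capacity=128):
        self.fd = fd
        self.capacity = capacity
        self.pages = OrderedDict()  # page_no -> bytes, oldest first

    def read_page(self, page_no):
        if page_no in self.pages:
            self.pages.move_to_end(page_no)  # hit: refresh LRU position
            return self.pages[page_no]
        # Miss: explicit positional read, no page fault involved.
        data = os.pread(self.fd, PAGE_SIZE, page_no * PAGE_SIZE)
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict the coldest page
        self.pages[page_no] = data
        return data
```

Residency and replacement decisions all happen in user space here; the kernel is only asked for bytes on a miss, which is exactly the sense in which hardware paging duplicates work the database is doing anyway.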