Nim compiles to C, so it can at least make use of fast C compilers. But the real solution seems to be a fully dynamic language like Common Lisp, which can compile, load, and redefine (hot-swap) everything incrementally, so you're not starting from nothing every time.
Compiling to C is not a compile-performance advantage. If anything, it slows you down, as described in the article (header files). It's just not as bad as C++.
On big projects, linking times are really the bottleneck.
Sure, incremental linkers exist [1], but all of them tend to do O(n) work on every invocation. Source control software, even software developed explicitly to scale [2], behaves the same way. Makefiles as well; Tup [3] tries to solve this, but in vain, since the linking step still holds everything up. There is so much inherent inability to scale built into our tools. Do O(n) work per invocation across a project that keeps growing, and the total cost is O(n^2). So on big projects everything grinds to a halt, because you cannot buy enough developers and hardware to keep up with O(n^2) forever.
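To make the point concrete, here is a rough sketch of why the link step is the part that doesn't scale, using hypothetical file names. The compile step after an edit is proportional to the change, but a conventional link pass reads every object file again:

```shell
# After editing only a.c, an incremental build recompiles just that
# translation unit -- work proportional to the change:
cc -c a.c -o a.o

# But the final link still consumes ALL object files, changed or not,
# so every invocation does work proportional to total project size:
cc a.o b.o c.o d.o -o app   # O(n) on every build, no matter how small the edit
```

This is the structural problem the comment describes: tools like Tup can make the dependency-tracking side incremental, but as long as the link step itself re-reads everything, it caps how fast the edit-build cycle can get.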
It is very fast for certain projects, but link times on some can eat up that advantage. I have a toy program using vibe.d that takes around 1s to compile and 10s to link. I could probably make that faster by switching to gold, but I haven't yet.
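For reference, switching a D build to the gold linker usually means passing a flag through to the C compiler driver that dmd uses for linking. A sketch, assuming a GCC/Clang toolchain with gold installed (`app.d` is a placeholder name):

```shell
# dmd's -L prefix forwards the flag to the link step; -fuse-ld=gold
# asks the gcc/clang driver to link with ld.gold instead of ld.bfd.
dmd -L-fuse-ld=gold app.d

# With ldc2 there is a dedicated option for the same thing:
ldc2 --linker=gold app.d
```

Whether this helps depends on the project; gold tends to pay off most on large C++-style link jobs with many sections, which is exactly the vibe.d template-heavy case described here.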
I'm not sure how anything over 100ms could be considered a "fast compile" for a toy program... Basically Pascal, or simple C compilers like tcc, should be the benchmark, in my not-so-humble opinion.
That's not to say I don't allow for trade-offs: I'd gladly trade some milliseconds (or, grudgingly, seconds) for features.
Most of the compile time of that particular program is spent parsing HTML templates and converting them to D code. This is done by library-provided D code that runs at compile time, as part of a template function instantiation in my code. So the compile time is inflated a bit in this case.