Weka uses D to build systems that store petabytes of data in production. They really can't have the GC kicking in in that situation. So they write the first draft of the code using the GC, then for production make it GC-free, largely by preallocating. If you really care about latency, I guess you can't necessarily afford to use malloc either.
But for the rest of us: D is not really comparable with Java, but people tend to think of it the same way. I don't use classes myself (I had one, but a colleague didn't like it and removed it, though one or two in library code may have crept back recently); I allocate structs on the stack instead, which is generally the more idiomatic style in D. Depends how you count it, but that's at 120k sloc, maybe 200k if you include the periphery.
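To illustrate the struct-on-the-stack style, a minimal sketch (the `Point` type here is made up for the example):

```d
// Structs in D are value types and live on the stack by default -
// no GC allocation involved. (Point is a made-up example type.)
struct Point
{
    double x, y;
    double normSquared() const { return x * x + y * y; }
}

void main()
{
    auto p = Point(3.0, 4.0); // stack allocation, no GC
    assert(p.normSquared() == 25.0);

    // The class equivalent would be allocated with `new` on the
    // GC heap - that's the Java-style usage most people assume.
}
```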
It's easy to allocate without the GC using std.experimental.allocator and the EMSI containers: regional heaps, free lists, whatever hybrid model you want.
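A minimal sketch of what that looks like with std.experimental.allocator (using Mallocator here, but any allocator with the same interface slots in):

```d
import std.experimental.allocator : makeArray, dispose;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // Allocate an int[] straight from malloc, bypassing the GC entirely.
    auto buf = Mallocator.instance.makeArray!int(100);
    scope(exit) Mallocator.instance.dispose(buf); // manual, deterministic free

    buf[0] = 42;
    assert(buf.length == 100 && buf[0] == 42);
}
```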
See excel-d for one example.
If you keep the rest of your heap small, say below 200 MB, most people will be fine.
If you don't want to use D, blame the docs and the lack of examples - still not as good as they should be, but way better than before, and all the unit tests are editable and runnable now. But I think the GC thing is more FUD than a real objection for most people.
People on embedded systems use a custom runtime, and people who want to avoid the GC can have the compiler disallow GC usage in their entire program via the @nogc annotation.
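For instance, marking a function @nogc makes the compiler reject any GC allocation in it or anything it calls - apply it to `main` and the whole program is covered:

```d
// @nogc is transitive: everything sum() calls must also be @nogc.
@nogc nothrow int sum(const int[] xs)
{
    int total;
    foreach (x; xs)
        total += x;
    return total; // no GC allocation anywhere in here
}

@nogc void main()
{
    int[3] xs = [1, 2, 3]; // fixed-size array: stack, not GC
    assert(sum(xs[]) == 6);

    // auto ys = new int[3]; // would not compile: @nogc forbids `new`
}
```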
There are other, less extreme solutions, such as registering some threads with the GC and not others. You can also just find your bottlenecks and mark those specific functions @nogc. Some people turn off automatic collections and trigger them manually, only when it's OK to pause.
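The manual-collection approach goes through core.memory.GC - a sketch of the pattern:

```d
import core.memory : GC;

void main()
{
    GC.disable(); // no automatic collections from here on
    scope(exit) GC.enable();

    foreach (i; 0 .. 1_000)
    {
        auto scratch = new int[](64); // still allocates on the GC heap,
        scratch[0] = i;               // but won't trigger a pause mid-loop
    }

    // Safe point in the program: run the collection ourselves.
    GC.collect();
    GC.minimize(); // optionally return unused pages to the OS
}
```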