How do you make it load on two GPUs, or does llama.cpp do that automatically? I have a setup with a Threadripper, an RTX 3090, and a Titan RTX. I haven't had the time to set it up, so that's why I've been using my Mac.
llama.cpp doesn't use the GPU at all. The genius *.cpp projects (whisper.cpp, llama.cpp) are specifically intended to optimize and democratize otherwise GPU-only (CUDA, ROCm) models so they run on CPUs. Technically speaking, the released models are already capable of running on CPU via the standard frameworks' (PyTorch, TensorFlow) CPU support, but in practice, without a lot of optimization, they are incredibly slow to the point of being useless; hence *.cpp.
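For anyone who wants to try the CPU-only path described above, a rough sketch of the llama.cpp workflow looks like this. (The exact script names, quantization flags, and file layout change between versions of the repo, and you have to supply the LLaMA weights yourself; treat the paths below as illustrative, not authoritative.)

```shell
# Build llama.cpp from source (CPU only, no CUDA/ROCm required)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Convert the original PyTorch weights to the ggml format,
# then quantize to 4-bit so the 7B model fits in a few GB of RAM
# (assumes the weights are already in ./models/7B/)
python3 convert-pth-to-ggml.py models/7B/ 1
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2

# Run inference on the CPU; -t sets the number of threads
./main -m ./models/7B/ggml-model-q4_0.bin -t 8 -p "Building a website can be done in 10 simple steps:"
```

The 4-bit quantization step is what makes this practical: it cuts the 7B model's memory footprint to roughly 4 GB, which is why it runs on ordinary laptops at all.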
llama.cpp takes advantage of the fact that LLaMA 7B is a tiny, heavily optimized model. It would run on almost anything, and very fast. I really doubt you can run the 30B or 65B models at acceptable speed on a CPU for at least a couple of years. (I'm ready to eat my words in a couple of weeks.)