If you have spare PCIe slots or risers, you can put all the GPUs in one system.

llama.cpp will let you run inference distributed across different systems, but I suspect the added latency would make it not worthwhile. If you already have three systems, it would only cost you a few minutes to test it.
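For reference, llama.cpp does this through its RPC backend: you run `rpc-server` on each remote machine and point the main process at them. A rough sketch, assuming a build with RPC enabled and placeholder hostnames/ports (`box2`, `box3`, `50052`) and model path that you would substitute with your own:

```shell
# On each remote machine (box2, box3): build llama.cpp with the RPC
# backend enabled, then start an RPC server exposing that machine's GPU.
cmake -B build -DGGML_RPC=ON && cmake --build build --config Release
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the main machine: run inference, offloading layers across the
# local GPU plus the two remote RPC workers.
./build/bin/llama-cli -m model.gguf -ngl 99 \
    --rpc box2:50052,box3:50052 \
    -p "Hello"
```

Every token generation round-trips over the network between the workers, which is why latency tends to dominate unless the model genuinely doesn't fit on one box.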
