
Not that much lower: 295 W vs. 355 W, and for LLM inference VRAM bandwidth is the main bottleneck anyway. But the price is ridiculous.
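The bandwidth claim has a simple back-of-envelope justification: during autoregressive decoding, every generated token requires streaming the full set of model weights from VRAM, so throughput is capped at roughly bandwidth divided by model size. A minimal sketch (the GPU and model numbers below are illustrative assumptions, not figures from the comment):

```python
def decode_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Bandwidth-bound ceiling for single-stream decoding: each token
    # reads all weights from VRAM once, so tokens/s ~= bandwidth / weights.
    return bandwidth_gb_s / model_size_gb

# Hypothetical example: a 70B-parameter model quantized to 4-bit (~40 GB)
# on a GPU with ~1000 GB/s of VRAM bandwidth.
ceiling = decode_tokens_per_second(1000.0, 40.0)
print(f"{ceiling:.0f} tokens/s ceiling")  # prints "25 tokens/s ceiling"
```

This is why a modest power difference matters little for inference: the compute units are mostly waiting on memory, and doubling bandwidth helps far more than doubling FLOPS.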

