zozbot234 | 17 days ago | on: Microsoft and OpenAI end their exclusive and reven...
SOTA models are reportedly MoE, not dense.
bigyabai | 17 days ago
A 5T MoE model is still bottlenecked by streaming weights from SSD, in addition to compute bottlenecks during prefill and decode.
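A rough back-of-envelope sketch of the bandwidth math (a minimal sketch in Python; the 8-bit precision and ~7 GB/s NVMe read bandwidth are assumptions, and the ~220B active parameters per token is the figure cited downthread):

    # Hedged estimate: decode rate when each token's active expert
    # weights must be streamed from SSD. All constants are assumptions.
    active_params = 2.2e11     # ~220B active params per token (assumed)
    bytes_per_param = 1        # assumed 8-bit quantized weights
    ssd_bandwidth = 7e9        # assumed ~7 GB/s NVMe sequential read

    bytes_per_token = active_params * bytes_per_param
    tokens_per_sec = ssd_bandwidth / bytes_per_token
    print(f"~{tokens_per_sec:.3f} tokens/sec")   # ~0.032 tok/s: SSD-bound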
zozbot234 | 17 days ago
True, but a cluster built on pipeline parallelism can naturally stream from multiple SSDs in parallel, which probably makes offload somewhat more effective. And RAM caching is also a natural option on top of that.
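A minimal sketch of the aggregate-bandwidth argument, assuming each pipeline stage holds its own layers on a local NVMe drive (the stage count and per-drive bandwidth are illustrative assumptions):

    # Hedged sketch: pipeline stages stream their own layers from
    # their own SSDs, so read bandwidth scales with stage count.
    n_stages = 8               # assumed pipeline depth
    per_drive_bw = 7e9         # assumed ~7 GB/s per NVMe drive
    bytes_per_token = 2.2e11   # same assumed active bytes as above

    aggregate_bw = n_stages * per_drive_bw
    tokens_per_sec = aggregate_bw / bytes_per_token
    print(f"~{tokens_per_sec:.2f} tokens/sec")   # ~0.25 tok/s with 8 drives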
bigyabai | 17 days ago
You won't be RAM-caching much of anything when the active experts amount to 220B parameters' worth of layers.
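The size mismatch is easy to put in numbers (a sketch; the 8-bit precision and 512 GB of host RAM are assumptions):

    # Hedged sketch: how little of a 5T MoE fits in a RAM cache when a
    # single token's active experts span ~220B parameters.
    expert_set_bytes = 2.2e11 * 1   # 220B params at assumed 8-bit = 220 GB
    host_ram = 512e9                # assumed 512 GB of host RAM
    model_bytes = 5e12 * 1          # 5T params at 8-bit = 5 TB

    print(f"~{host_ram / expert_set_bytes:.1f} active sets fit in RAM")  # ~2.3
    print(f"RAM covers ~{host_ram / model_bytes:.0%} of the model")      # ~10%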