Long story short, training requires intensive device-to-device communication. Distributing training across volunteers' machines over the internet is possible in theory, but so inefficient that it's not worth it. Here is a new paper that looks to be the most promising approach yet: https://arxiv.org/abs/2301.11913
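To put rough numbers on the communication problem, here's a back-of-envelope sketch. The figures are my own assumptions for illustration (a 7B-parameter model, fp16 gradients, naive data parallelism, a 100 Mbit/s home uplink), not anything from the paper:

    # Why naive data-parallel training over home connections stalls:
    # every optimizer step has to exchange a full set of gradients.
    params = 7e9                          # assumed model size (parameters)
    bytes_per_grad = 2                    # fp16 gradient per parameter
    grad_bytes = params * bytes_per_grad  # bytes exchanged per step
    uplink_bytes_per_s = 100e6 / 8        # assumed 100 Mbit/s uplink, in bytes/s

    seconds_per_sync = grad_bytes / uplink_bytes_per_s
    print(f"Gradient payload per step: {grad_bytes / 1e9:.0f} GB")
    print(f"Time to upload one step's gradients: {seconds_per_sync / 60:.0f} minutes")
    # ~14 GB and ~19 minutes per step over a home connection, versus
    # milliseconds over NVLink inside a datacenter.

That gap is exactly what communication-efficient approaches like the linked paper are trying to close.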
That's brilliant, I would love to spare compute cycles and network bandwidth on my devices for this if there's an open source LLM on the other side that I can use in my own projects, or commercially.
Doesn't feel like there's much competition for ChatGPT at this point otherwise, which can't be good.
On the generative image side of the equation, you can do the same thing with Stable Diffusion[1], thanks to a handy open source distributed computing project called Stable Horde[2].
LAION has started using Stable Horde for aesthetics training to feed back into and improve their datasets for future models[3].
I think one can foresee the same thing eventually happening with LLMs.
Full disclosure: I made ArtBot, which is referenced in both the PC World article and the LAION blog post.
> Doesn't feel like there's much competition for ChatGPT at this point otherwise, which can't be good.
Facebook open sourced their LLM, called OPT [1]. There's not much else, and OPT isn't exactly easy to run (requires like 8 GPUs).
I'm not an expert, so I don't know why some models, like the image generators we've seen, are able to fit on phones, while LLMs require $500k worth of GPUs to run. Hopefully this is the first step to changing that.
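A big part of it is just parameter count. Here's a rough sketch of the memory needed just to hold the weights; the parameter counts are approximate public figures, and fp16 storage is my assumption:

    # Memory to hold model weights alone (ignoring activations, KV cache, etc.)
    def weight_memory_gb(params, bytes_per_param=2):  # fp16 weights
        return params * bytes_per_param / 1e9

    print(weight_memory_gb(1e9))    # ~1B params (Stable Diffusion scale) -> ~2 GB, phone-sized
    print(weight_memory_gb(175e9))  # OPT-175B -> ~350 GB, spread across many 80 GB GPUs

So a ~1B-parameter image model fits in a couple of GB, while a 175B-parameter LLM needs hundreds of GB before you even start running it, hence the multi-GPU requirement.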
I've seen Petals mentioned several times before and I don't think it's the same thing. Correct me if I'm wrong, but it seems Petals is for running distributed inference and fine-tuning of an existing model. What the above poster and I really want to see is distributed training of a new model across a network.
Much like I was able to choose to donate CPU cycles to a wide variety of BOINC-based projects, I want to be able to donate GPU cycles to anyone with a crazy idea for a new ML model - text, image, finance, audio, etc.
The labelled data seems more of a blocker than anything else. As far as I'm aware, the actual neural networks behind these models are relatively simple; it's the human labor involved in gathering, cleaning, and labeling data for training that is the most resource intensive.
The data is valuable, yes, but training a model still requires millions of dollars' worth of compute. That's a perfect cost to distribute among volunteers if it could be done.
Another idea is to dedicate CPU cycles to something else that is easier to distribute, and then use the proceeds to buy massive amounts of GPU time for academic use.
Do we need a SETI@home-like project to distribute the training computation across many volunteers so we can all benefit from the trained model?