>I am using Jupyter Notebooks for what they are normally used for: machine learning experiments.
I'd be interested to know what kinds of problems you run into doing machine learning. Are you doing it as part of a team or for fun?
We're building something in that space[0]: a machine learning platform. We've been doing ML projects for many years and are building this to help ourselves. Does the description in that post solve some/all/none of your problems?
That sounds amazing. Right now I am just learning about CNNs and I just bought a gaming laptop specifically so I wouldn't have to rely on Paperspace and vast.ai for everything.
So at the moment I am probably going to try to take advantage of the laptop a bit more. In the future, if I have a contract or am trying to get an ML contract, I may take advantage of some of your features, which sound like a big advantage. How does the detection and automatic saving of models work? That seems pretty hard to do. Does it work with the latest Keras, for example? And can I then just skip down to the cell that uses the trained model? To be honest I haven't used it much, but I thought that if I committed the Docker process after doing the training, I would already be able to use the trained model when I ran that image again.
My real goal is eventually to build a relatively novel real-time computer vision system for a simulated robot. So at some point I am not sure that Jupyter is going to be right. But I like it, so maybe I can stretch out its utility.
What we mean by automatic model detection is that you don't have to add experiment tracking code to your notebook to "log" the usual things, such as the model, parameters, and metrics; we do that for you so you don't clutter your notebook. You can still log explicitly if you want to, or if you're trying to capture something in particular, but you mostly don't have to think about it, generate experiment names, or worry about where your models are being logged.
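To make the idea concrete, here is a minimal sketch of what "automatic" tracking can look like under the hood: the platform wraps the training entry point and captures the parameters, metrics, and model as a side effect, so the notebook itself contains no logging calls. The names here (`autolog`, the `runs/` directory, the toy `train` function) are illustrative assumptions, not a real library API.

```python
import json
import pickle
import time
from pathlib import Path

# Hypothetical runs directory where each training call is snapshotted.
RUNS_DIR = Path("runs")

def autolog(train_fn):
    """Wrap a training function so each call becomes a tracked run."""
    def wrapper(**params):
        run_dir = RUNS_DIR / f"run-{int(time.time() * 1000)}"
        run_dir.mkdir(parents=True, exist_ok=True)
        model, metrics = train_fn(**params)  # user code runs unchanged
        # Logged automatically, without any tracking code in the notebook:
        (run_dir / "params.json").write_text(json.dumps(params))
        (run_dir / "metrics.json").write_text(json.dumps(metrics))
        with open(run_dir / "model.pkl", "wb") as f:
            pickle.dump(model, f)  # snapshot the trained model
        return model, metrics
    return wrapper

@autolog
def train(lr=0.01, epochs=3):
    # Stand-in for a real training loop; returns (model, metrics).
    model = {"weights": [lr * e for e in range(epochs)]}
    return model, {"loss": 0.1 / lr}

model, metrics = train(lr=0.1, epochs=2)
```

A real platform would hook into the framework (e.g. intercepting a Keras `fit` call) rather than requiring a decorator, but the effect is the same: parameters, metrics, and the model end up saved per run without explicit logging code.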
- You use a notebook to write code to train and evaluate your model
- The models it produces are deployed and can be queried through requests
- Then you can monitor your deployed model's performance on a live dashboard
>So at some point I am not sure that Jupyter is going to be right. But I like it, so maybe I can stretch out its utility.
Generally speaking, you use the notebook to train models and you use these models to return predictions. In the workflow I described above, the models produced are deployed and interacted with through requests.
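As a sketch of what "interacted with through requests" might look like, here is the request/response shape a deployed model could use. The field names (`instances`, `predictions`) and the simulated server are assumptions for illustration; in practice the body built below would be POSTed to the deployment's URL.

```python
import json

def build_predict_request(features):
    """Serialize one prediction request body (assumed payload shape)."""
    return json.dumps({"instances": [features]})

def parse_predict_response(body):
    """Pull predictions out of a JSON response body (assumed shape)."""
    return json.loads(body)["predictions"]

# Client side: what you would send to the deployed model's endpoint,
# e.g. requests.post(url, data=body).
body = build_predict_request({"x1": 0.5, "x2": 1.25})

# Server side, simulated locally so the example is self-contained:
# a toy "model" that just sums the features of each instance.
instances = json.loads(body)["instances"]
response = json.dumps({"predictions": [sum(i.values()) for i in instances]})

print(parse_predict_response(response))  # → [1.75]
```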
>I am using Jupyter Notebooks for what they are normally used for: machine learning experiments.
But it might be a good idea to try to expand them to other use cases, since it's such a powerful concept.