
I've been waiting for a while for the tech to catch up, but it should be more or less possible now to use Whisper and an LLM to run a virtual assistant locally. Virtual assistants always seemed like a privacy nightmare to me so I never pulled the trigger.

But if it's all local? It would be a great product. No internet unless I turn on a physical switch, no ads, no monthly fee. Just do the thing I want you to do. Maybe at the moment it will still need to talk to my GPU. But we're getting there.



You can duct tape together Whisper and GPT4All in a couple dozen lines of Python.

It wasn't bad. I'm on a 3070, so it was slow but tolerable: roughly two to five seconds of latency for the full Whisper + Mistral 7B pipeline.

I named mine Jeeves, so at any point I could just "Ask Jeeves" (lol) and it would talk back to me. The TTS spoke slower than I would prefer, but I probably could have fixed it. I also should have prepended the prompt with something to encourage it to be brief. It often started reading out a couple of paragraphs of text.
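The duct-tape pipeline described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual script: it assumes the openai-whisper and gpt4all Python packages are installed, and the GGUF model filename is a placeholder for whichever local model you've downloaded. It also bakes in the brevity instruction the commenter wished they had added.

```python
# Minimal Whisper -> local LLM sketch (assumes: openai-whisper, gpt4all packages;
# the GGUF filename below is a placeholder, not a specific required model).

def build_prompt(user_text: str) -> str:
    """Prepend a brevity instruction so the model doesn't read out paragraphs."""
    return ("Answer in one or two short sentences.\n\n"
            f"User: {user_text}\nAssistant:")

def answer(audio_path: str) -> str:
    """Transcribe a recorded question and generate a short spoken-style reply."""
    import whisper                    # openai-whisper
    from gpt4all import GPT4All

    stt = whisper.load_model("base")  # smaller Whisper models = lower latency
    question = stt.transcribe(audio_path)["text"].strip()

    llm = GPT4All("mistral-7b-instruct.Q4_0.gguf")  # placeholder filename
    return llm.generate(build_prompt(question), max_tokens=128)

# Usage (requires a microphone recording saved to disk):
#   reply = answer("question.wav")
# then feed `reply` to whatever TTS engine you like.
```

Voice activation ("Ask Jeeves") and TTS playback are left out; a simple version can just record a fixed-length clip on a hotkey and pipe the reply to a local TTS engine.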


Unfortunately you’d probably want a subscription to newer models trained on more recent, relevant data. It's basically the difference between a standalone GPS unit and an internet-connected maps app: at first it doesn’t seem like you need anything new, but slowly the world around it changes.

But yea this seems like a great vision generally. I think that we’ll progress past a chat LLM to one that autogenerates UIs - several companies have already demoed this.



