
M5 Air here with 32 GB RAM and 10/10 cores. Has anyone had luck with MLX builds on oMLX so far? I'm not at my machine right now and would love to know whether these models already work, including tool calling.


The latest release, v0.3.2, has partial support: generation works, but not all special tokens are handled yet. I've done some personal testing to add tool calling and <|channel> thinking support. https://github.com/Yukon/omlx
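In case it helps anyone experimenting: here's a minimal sketch of what parsing channel-style thinking output could look like, assuming a `<|channel>` marker followed by a channel name (the actual oMLX token format may well differ, so treat this as illustrative only):

```python
def split_channels(raw: str, marker: str = "<|channel>"):
    """Split a raw generation into per-channel segments.

    Assumes a stream shaped like
    '<|channel>analysis ...thinking...<|channel>final ...answer...'.
    The marker name is taken from the comment above and is an
    assumption, not oMLX's confirmed format.
    """
    channels = {}
    # Everything before the first marker (if any) is ignored here.
    for part in raw.split(marker)[1:]:
        name, _, body = part.partition(" ")
        channels[name] = body.strip()
    return channels


out = split_channels("<|channel>analysis weighing options<|channel>final 42")
print(out["analysis"])  # weighing options
print(out["final"])     # 42
```

A real implementation would work on token IDs during streaming rather than post-hoc string splitting, but the same channel-dispatch idea applies.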


Awesome, can't wait! I just checked it out, and 0.3.2 does indeed work for baseline chatting with MLX versions of Gemma 4 … downloading and comparing different variants right now!


I know that someone got Gemma 4 E4B working with MLX [1] but I don't know much more than that.

1: https://github.com/bolyki01/localllm-gemma4-mlx



