
I'm not seeing the connection. This work is about low-level optimization of matrix multiplication. The repo you linked seems to be about replacing back-propagated gradients with a cheaper estimate. What's the similarity you see between these two?


Correct, I think I mistook it for "use a small neural net to approximate matrix multiplication"; instead it seems to be "use cheaper replacements for matrix multiplication without much accuracy loss".
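As a rough illustration of the "cheaper matmul without much accuracy loss" idea (this is a generic sketch, not the linked work's actual method), one classic trade-off is replacing an exact product with a truncated-SVD low-rank approximation:

```python
import numpy as np

# Sketch only: approximate A @ B via a rank-k truncation of A.
# The synthetic matrices below are assumptions for the demo, not real data.
rng = np.random.default_rng(0)
# Build an approximately low-rank A so truncation is a good fit.
A = rng.standard_normal((256, 48)) @ rng.standard_normal((48, 256)) \
    + 0.01 * rng.standard_normal((256, 256))
B = rng.standard_normal((256, 128))

k = 48  # rank budget: smaller k is cheaper but less accurate
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k]            # (256, k), columns scaled by singular values

# Cost is O(n*k*n) for the two skinny products instead of O(n^3).
approx = A_k @ (Vt[:k] @ B)

exact = A @ B
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.4f}")
```

Whether the accuracy loss is acceptable depends entirely on how fast the singular values of `A` decay, which is why methods like this work well on structured matrices and poorly on generic dense ones.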

Wellll that means I can give DNI another try :D



