Ah I see. But as far as training goes, the difference between the two methods ("evolution" and backprop) is a matter of locality, no? Backprop modifies weights based on the local gradient toward higher fitness, while evolution jumps in sparse random directions. In this view backprop is indeed vulnerable to local maxima if your optimizer isn't very good, but isn't that just a matter of choosing a good optimization method? In other words, shouldn't combining local backprop optimization with global evolutionary search be the job of a robust optimization algorithm? A rough sketch of what I mean is below.
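For what it's worth, that hybrid is roughly what "memetic algorithms" do: an evolutionary outer loop for global exploration, with gradient steps refining each candidate locally. A minimal sketch of the idea (a toy multimodal Rastrigin loss with a hand-derived gradient stands in for a real network and backprop; all names and hyperparameters here are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def loss(w):
        # Rastrigin: many local minima, one global minimum at the origin
        return 10 * w.size + np.sum(w**2 - 10 * np.cos(2 * np.pi * w))

    def grad(w):
        # Analytic gradient of the Rastrigin loss (stand-in for backprop)
        return 2 * w + 20 * np.pi * np.sin(2 * np.pi * w)

    def local_refine(w, lr=0.01, steps=50):
        # "Backprop-like" phase: follow the local gradient downhill
        for _ in range(steps):
            w = w - lr * grad(w)
        return w

    def evolve(pop_size=20, dim=5, generations=30, sigma=0.5):
        # "Evolution-like" phase: sparse random jumps plus selection
        pop = rng.uniform(-5, 5, size=(pop_size, dim))
        for _ in range(generations):
            pop = np.array([local_refine(w) for w in pop])      # local search
            fitness = np.array([loss(w) for w in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]  # keep best half
            children = parents + sigma * rng.normal(size=parents.shape)  # mutate
            pop = np.vstack([parents, children])
        best = pop[np.argmin([loss(w) for w in pop])]
        return best, loss(best)

    best, best_loss = evolve()
    print("best loss:", best_loss)

The gradient steps pull each candidate into the bottom of its nearest basin, while mutation and selection let the population hop between basins, which is exactly the locality split described above.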

