"When you end up with a bunch of papers showing that genetic algorithms are competitive with your methods, this does not mean that we’ve made an advance in genetic algorithms. It is far more likely that this means that your method is a lousy implementation of random search."
The article seems reasonable and well-argued. But policy gradients are a cornerstone of reinforcement learning; just about every textbook dedicates some time to them.
So how can we reconcile that observation with the arguments in the article? Is Recht overstating his case, or is this a big screw-up in the field in general?
Can anyone who knows about reinforcement learning weigh in?
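For readers who haven't seen it, the method under discussion can be sketched in a few lines. This is my own toy example (not from Recht's article): the REINFORCE / score-function estimator maximizing E[r(a)] for a Gaussian "policy" a ~ N(mu, sigma^2) with reward r(a) = -(a - 3)^2, so the optimum is mu = 3. The critique is that on many benchmarks this kind of noisy gradient estimate does little better than perturbing parameters at random.

```python
import random

def reward(a):
    # Hypothetical one-dimensional reward with its maximum at a = 3
    return -(a - 3.0) ** 2

def reinforce(iters=500, batch=10, lr=0.1, sigma=1.0, seed=0):
    rng = random.Random(seed)
    mu = 0.0  # mean of the Gaussian policy, the only learned parameter
    for _ in range(iters):
        actions = [rng.gauss(mu, sigma) for _ in range(batch)]
        rewards = [reward(a) for a in actions]
        baseline = sum(rewards) / batch  # batch-mean baseline for variance reduction
        # Score function: d/dmu log N(a; mu, sigma^2) = (a - mu) / sigma^2
        grad = sum((r - baseline) * (a - mu) / sigma ** 2
                   for r, a in zip(rewards, actions)) / batch
        mu += lr * grad
    return mu

print(reinforce())  # should land close to the optimum at 3.0
```

Note the estimator only ever sees sampled rewards, never the derivative of `reward` itself, which is why Recht compares it to random search: both probe the objective with random perturbations.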
Ben's blog series culminated in a nice article[1] touring reinforcement learning, and he gave a tutorial on the topic at ICML[2]. They might address some of your concerns.
http://www.argmin.net/2018/02/20/reinforce/