The page allows you to toggle LDA topics on and off to browse the papers, or (my personal favorite) find a paper you like and sort the other papers by tf-idf similarity, which tends to reveal exceptionally relevant papers.
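The sort-by-similarity idea can be sketched roughly like this: build tf-idf vectors over the paper abstracts, then rank the rest against the one you like by cosine similarity. This is a minimal pure-Python sketch on toy "abstracts", not the page's actual code:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse tf-idf vector (term -> weight) for each tokenized doc."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term->weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy abstracts; in the real page these would be the NIPS paper texts.
papers = [
    "deep learning for image classification".split(),
    "convolutional networks for image recognition".split(),
    "topic models for text corpora".split(),
]
vecs = tfidf_vectors(papers)
liked = 0  # the paper you like
ranked = sorted((i for i in range(len(papers)) if i != liked),
                key=lambda i: cosine(vecs[liked], vecs[i]),
                reverse=True)
print(ranked)  # most similar paper first
```

Note that a term like "for" that appears in every abstract gets idf log(n/n) = 0, so only distinctive shared vocabulary (here, "image") drives the ranking.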
Very, very nice job. Your page is a great example of sane, simple, and functional data visualization.
I've been sifting through these papers trying to prioritize them by relevance to my work so that I can get them into Mendeley and dig in. You just made it a lot easier. Looks like a pretty good haul this year, for me :-)
It only got one other upvote and never made it to the front page. Meanwhile, another article submitted at almost exactly the same time with far less interesting content but a more provocative headline (on a user getting banned from Uber for API abuse) got ~15 votes, pushing it onto the front page.
I re-submitted this page with a slightly more descriptive (and buzzwordy) headline ("State of the art Machine Learning papers: NIPS 2013"), and it almost immediately ended up on the front page. Since then, the headline has been reverted to the page's original title, again slowing the rate at which it receives votes.
Now the game is: among these tons of (good or bad) applied mathematics papers, find the one paper, if it even exists, that will:
- Have a real-world application
- Stand the test of time
This attitude is pretty tiresome. In research we can't predict in advance what will have lasting value. That's why it's called research. The same applies to startups.
Tons of papers are published every year for the sake of publication and careers, not for the sake of science.
Most of them do not even contain any significant delta over previous research.
An important activity for a researcher is to sort the interesting papers from the garbage, since the selection process of even high-level conferences is deeply broken.
Just read the SIGIR proceedings, where every paper beats the previous baseline by 0.X% on datasets that do not represent the real problem; that's just one example among many.
I said that academics' motivations to publish lead to tons of papers that, while scientifically correct (at least at top-tier conferences), bring absolutely nothing to the party.
I made a nicer version of this to browse for the ICML 2013 papers, based on a similar thing Andrej Karpathy did for the NIPS 2012 papers: http://benhamner.com/icml2013preview/#/
I don't believe anyone's done this for NIPS 2013 yet.
http://cs.stanford.edu/people/karpathy/nips2013/