Hacker News

It depends on what conclusions you're trying to draw from that information. What conference a paper was accepted to is a second-order signal of its noteworthiness. It's probably easier for someone versed in the field to just read the paper to determine if it's interesting. If you're using the conference as a quick pass/fail as you skim through the abstracts of hundreds of papers, ok, but you probably wouldn't make time to comment on HN about it in that case.

This paper looks like it builds on pretty well-known techniques like stacked autoencoders, so let's see what first-order noteworthiness data we can gather from a quick skim of the paper. If I had to guess why it wasn't accepted into a better conference:

- It uses stacked autoencoders, which are pretty out of fashion

- It bothers reporting results on MNIST

- (more subjectively) It uses an unfortunately common rhetorical move: saying "here's something the brain does" and then hand-waving that it's a deep reason why the technique they've come up with is useful, when in fact the relationship is just "inspired by the general idea of", not "performs the same function as", the biological mechanism. In this case, the connection of their technique to research on neurogenesis is pretty flimsy. Neurogenesis is clearly not how an adult human brain forms new memories or gains proficiency in new skills (which they acknowledge in the conclusion).
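For readers unfamiliar with the technique the paper builds on: a stacked autoencoder is trained greedily, one layer at a time, with each new layer learning to reconstruct the codes produced by the layer below it. Here's a minimal NumPy sketch of that idea with tied weights; all sizes, names, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Greedy layer-wise pretraining of a stacked autoencoder (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    """Train one tied-weight autoencoder layer on X; return (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(n_in)     # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)           # encode
        R = H @ W.T + c                  # decode (tied weights)
        err = R - X                      # reconstruction error
        d_pre = (err @ W) * (1 - H**2)   # backprop through tanh
        gW = X.T @ d_pre + err.T @ H     # gradient combines both uses of W
        W -= lr * gW / len(X)
        b -= lr * d_pre.sum(0) / len(X)
        c -= lr * err.sum(0) / len(X)
    return W, b

def stack_pretrain(X, layer_sizes):
    """Greedily pretrain each layer on the previous layer's codes."""
    params, H = [], X
    for h in layer_sizes:
        W, b = train_autoencoder(H, h)
        params.append((W, b))
        H = np.tanh(H @ W + b)           # codes become next layer's input
    return params, H

X = rng.normal(size=(64, 16))
params, codes = stack_pretrain(X, [8, 4])
print(codes.shape)  # (64, 4)
```

After pretraining, the stack is typically fine-tuned end-to-end on the actual task; the greedy phase just provides an initialization, which is part of why the approach fell out of fashion once better initializations and normalization made it unnecessary.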



> It's probably easier for someone versed in the field to just read the paper to determine if it's interesting.

> If you're using the conference as a quick pass/fail as you skim through the abstracts of hundreds of papers, ok

You answered your own statement, I think. Most researchers will skip a paper in a second-tier conference. In fact, most researchers I know won't read an entire paper; they'll read some of it and skip the rest.

You're correct that I am not an active researcher (otherwise I would not have time to be commenting). I merely did some research back in college. But honestly, even that little experience gives me a huge leg up on most HN commenters in understanding research. It's unfortunate that the only reason this paper is #1 on HN is that it has a cool title.

That being said, MNIST is not really a disqualifier. (Unfortunately) MNIST is the most popular dataset referenced in NIPS 2016 papers (https://twitter.com/benhamner/status/805864969065689088). The handwaving is also forgivable; many NIPS papers handwave a lot too.



