One strange thing about computer science, quite different from other research domains, is that researchers seldom work on repeating and validating others' results.
Two days ago, at the poster session for the final Machine Learning projects, a classmate of ours presented nothing but an attempt to reproduce a state-of-the-art paper, which claims to achieve 90% accuracy in gender prediction on blog data. He reported that he tried his best to follow the method proposed in the paper, and the best he could get on a well-known dataset was a little over 80%. Considering the difficulty of gender prediction, he did not doubt the published result, but he was very curious about how the paper could reach more than 90% accuracy.
I talked with him at his poster later. I think most of us share the same concern. As he put it, if a new researcher comes into a specific area and finds that someone has already reported such high accuracy, he probably won't dig into the problem any further.
A lot of interesting work was presented at the final poster session: a Wiki document recommendation system, predicting your location from your friends' locations on social networks, and so on. And our group got a frog that can dance and sing as the best poster prize ^^