How opinions are received by online communities: A case study on Amazon.com helpfulness votes
Venue
Proceedings of the 18th International Conference on World Wide Web, WWW 2009, Madrid, Spain, April 20-24, 2009, pp. 141-150
Publication Year
2009
Authors
Cristian Danescu-Niculescu-Mizil, Gueorgi Kossinets, Jon Kleinberg, Lillian Lee
Abstract
There are many on-line settings in which users publicly express opinions. A number
of these offer mechanisms for other users to evaluate these opinions; a canonical
example is Amazon.com, where reviews come with annotations like ``26 of 32 people
found the following review helpful.'' Opinion evaluation appears in many off-line
settings as well, including market research and political campaigns. Reasoning
about the evaluation of an opinion is fundamentally different from reasoning about
the opinion itself: rather than asking, ``What did Y think of X?'', we are asking,
``What did Z think of Y's opinion of X?'' Here we develop a framework for analyzing
and modeling opinion evaluation, using a large-scale collection of Amazon book
reviews as a dataset. We find that the perceived helpfulness of a review depends
not just on its content but also in subtle ways on how the expressed
evaluation relates to other evaluations of the same product. As part of our
approach, we develop novel methods that take advantage of the phenomenon of review
``plagiarism'' to control for the effects of text in opinion evaluation, and we
provide a simple and natural mathematical model consistent with our findings. Our
analysis also allows us to distinguish among the predictions of competing theories
from sociology and social psychology, and to discover unexpected differences in the
collective opinion-evaluation behavior of user populations from different
countries.
