A Mathematical Recommendation Model to Rank Reviewers Based on Weighted Score for Online Review System
01 Jan 2021, Vol. 164, pp. 317-325
TL;DR: A mathematical model is presented to rank reviewers by assigning weighted scores based on certain parameters; a pre-processing technique filters for quality reviews before the model is applied.
Abstract: Recommendation systems play a very important role in business from several aspects, and new systems and concepts continue to evolve to enrich business from different perspectives. Online reviews provide valuable information about products and services to consumers and are generally used to gauge the usefulness or popularity of a product. However, fake reviews are often posted to inflate the popularity of one's own product or to defame competitors' products. This poses the research challenge of validating reviews and the trustworthiness of reviewers. A recommendation model in an online review system aims to filter authentic reviewers and then rank the top reviewers or emphasize the more impactful reviews. In this paper, a mathematical model is presented to rank reviewers by assigning weighted scores based on certain parameters. A pre-processing technique is applied before the mathematical model to filter out quality reviews. The pre-processing technique and data analysis are evaluated on a real-life dataset to show the effectiveness of the proposed model.
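A minimal sketch of the kind of weighted-score ranking the abstract describes. The parameters (helpful votes, review count, verified-purchase ratio), the weights, and the pre-processing filter are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical weighted-score ranking of reviewers. Parameters and
# weights below are invented for illustration.

def weighted_score(reviewer, weights):
    """Combine per-reviewer parameters into a single weighted score."""
    return sum(weights[k] * reviewer[k] for k in weights)

def rank_reviewers(reviewers, weights):
    """Filter out low-quality profiles, then rank by weighted score."""
    # Pre-processing step (assumed): keep only reviewers with at least
    # one helpful vote.
    qualified = [r for r in reviewers if r["helpful_votes"] > 0]
    return sorted(qualified,
                  key=lambda r: weighted_score(r, weights),
                  reverse=True)

weights = {"helpful_votes": 0.5, "review_count": 0.3, "verified_ratio": 0.2}
reviewers = [
    {"name": "A", "helpful_votes": 10, "review_count": 4,  "verified_ratio": 1.0},
    {"name": "B", "helpful_votes": 2,  "review_count": 9,  "verified_ratio": 0.5},
    {"name": "C", "helpful_votes": 0,  "review_count": 20, "verified_ratio": 0.9},
]
ranking = rank_reviewers(reviewers, weights)
print([r["name"] for r in ranking])  # C is filtered out by pre-processing
```

The key design point mirrored here is the two-stage pipeline: a filter removes unqualified profiles before the weighted score decides the ordering.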
25 May 2022
TL;DR: In this article, a novel methodology is suggested to classify reviewers into categories based on the preciseness of their reviews, so that businesses can rely on precise reviewers for genuine product feedback and encourage this group to promote their products.
Abstract: The online marketplace has grown enormously in the last decade. Consumers are increasingly engaged in online shopping due to its operational flexibility, huge product search space and diversified products. In this virtual marketplace, consumers make purchase decisions based not only on the product specifications given by the seller but also on the product reviews given by peer customers. On the other hand, inconsistent reviews confuse consumers and thus have a negative impact on the overall sales of a business entity. Therefore, identifying reviewers who give precise and consistent opinions about products is important for business organizations. In this work, a novel methodology is suggested to classify reviewers into different categories based on the preciseness of their reviews. Business entities can rely on precise reviewers for genuine product feedback and encourage this group to promote their products, resulting in increased business growth.
11 Dec 2011
TL;DR: This paper proposes a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed, and explores how interactions between nodes in this graph can reveal the cause of spam.
Abstract: Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.
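The iterative model over the reviewer-review-store graph can be sketched as a fixed-point iteration. The update rules below are simplified stand-ins for the paper's actual scoring functions, and the ratings are invented:

```python
# Simplified trust propagation over a reviewer-review-store graph.
# Each review: (reviewer, store, rating in [0, 1]).
reviews = [
    ("r1", "s1", 0.9), ("r1", "s2", 0.8),
    ("r2", "s1", 0.9), ("r2", "s2", 0.7),
    ("r3", "s1", 0.1), ("r3", "s2", 0.2),  # deviant reviewer
]

reviewer_ids = {r for r, _, _ in reviews}
store_ids = {s for _, s, _ in reviews}
trust = {r: 1.0 for r in reviewer_ids}        # trustiness of reviewers
reliability = {s: 0.5 for s in store_ids}     # reliability of stores

for _ in range(20):
    # Store reliability: trust-weighted mean of its ratings.
    for s in store_ids:
        num = sum(trust[r] * x for r, t, x in reviews if t == s)
        den = sum(trust[r] for r, t, _ in reviews if t == s)
        reliability[s] = num / den
    # Review honesty: closeness of the rating to the store's reliability.
    honesty = {(r, s): 1.0 - abs(x - reliability[s]) for r, s, x in reviews}
    # Reviewer trustiness: mean honesty of that reviewer's reviews.
    for r in reviewer_ids:
        scores = [h for (rr, _), h in honesty.items() if rr == r]
        trust[r] = sum(scores) / len(scores)

print(sorted(trust, key=trust.get))  # least trusted reviewer first
```

The interaction the abstract emphasizes is visible here: a reviewer whose ratings consistently deviate from the (trust-weighted) consensus drags down the honesty of their reviews, which in turn lowers their trustiness on the next pass.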
TL;DR: A strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation is provided.
Abstract: Online reviews are often the primary factor in a customer’s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarce resource when it comes to online review spam. Research on methods for Big Data is of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise a methodology for conducting further investigation.
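As one concrete instance of the supervised techniques such surveys compare, here is a minimal bag-of-words Naive Bayes classifier. The tiny labeled corpus is invented for illustration; real pipelines would use richer NLP features and far larger datasets:

```python
# Minimal multinomial Naive Bayes over bag-of-words features.
import math
from collections import Counter

train = [
    ("best product ever buy now amazing deal", "spam"),
    ("amazing amazing must buy best best", "spam"),
    ("battery life is decent but the screen scratches easily", "ham"),
    ("worked fine for a month then the hinge loosened", "ham"),
]

def fit(data):
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter(label for _, label in data)
    for text, label in data:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    total = sum(priors.values())
    best_label, best_lp = None, -math.inf
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

counts, priors, vocab = fit(train)
print(predict("best deal ever must buy", counts, priors, vocab))
```

This sketch only covers the text-features side mentioned in the abstract; reviewer metadata (rating patterns, posting behavior) would enter as additional features in a real system.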
01 Apr 2017
TL;DR: Results have several implications - firstly, businesses should focus on building a good review-based online reputation; secondly, they should encourage top trustworthy reviewers to review their products and services; and thirdly, trustworthy reviewers could be identified and ranked using reviewer characteristics.
Abstract: Why do top movie reviewers receive invitations to exclusive screenings? Even popular technology bloggers get free new gadgets for reviewing. How much do these reviewers really matter for businesses? While the impact of online reviews on sales of products and services has been well established, not much literature is available on impact of reviewers for businesses. Source credibility theory expounds how a communication's persuasiveness is affected by the perceived credibility of its source. So, perceived trustworthiness of reviewers should influence acceptance of reviews, and consequently should have an indirect impact on sales. Using local business review data from Yelp.com, this paper successfully tests the premise that reviewer trustworthiness positively moderates the impact of review-based online reputation on business patronages. Given the importance of reviewer trustworthiness, the next logical question is how to estimate and predict it, if no direct proxy is available? We propose a theoretical model with several reviewer characteristics (positivity, involvement, experience, reputation, competence, sociability) affecting reviewer trustworthiness, and find all factors to be significant using the robust regression method. Further, using these factors, a predictive classification of reviewers into high and low level of potential trustworthiness is done using logistic regression with nearly 83% accuracy. Our findings have several implications - firstly, businesses should focus on building a good review-based online reputation; secondly, they should encourage top trustworthy reviewers to review their products and services; and thirdly, trustworthy reviewers could be identified and ranked using reviewer characteristics. 
Highlights: Trustworthiness of reviewers impacts sales of the products they review. Review data from Yelp is analyzed to identify trustworthiness of reviewers. Various characteristics of reviewers that influence trustworthiness are identified. Logistic regression is used to identify highly trustworthy reviewers.
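The logistic-regression step can be sketched as follows. The two features (experience, reputation) and the training rows are invented stand-ins for the paper's six reviewer characteristics and its Yelp data:

```python
# Toy logistic regression classifying reviewers as high (1) or low (0)
# potential trustworthiness. Features and labels are invented.
import math

# Each row: (experience, reputation), both scaled to [0, 1].
data = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),
    ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the log-loss.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def classify(x):
    return int(sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5)

print(classify((0.85, 0.9)), classify((0.15, 0.2)))
```

The paper's reported ~83% accuracy comes from its full feature set on real data; this sketch only shows the mechanics of the classification step.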
TL;DR: Examination of data from the popular review platform Amazon indicates that review helpfulness is positively related to reviewer profile and review depth but is negatively related to review rating.
Abstract: This article examines review helpfulness as a function of reviewer reputation, review rating, and review depth. Drawing data from the popular review platform Amazon, results indicate that review helpfulness is positively related to reviewer profile and review depth but negatively related to review rating. Users seem to have a proclivity for reviews contributed by reviewers with a positive track record. They also appreciate reviews with lambasting comments and those with adequate depth. After highlighting its implications for theory and practice, the article concludes with limitations and areas for further research.
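The reported directional effects can be illustrated with a toy correlation check; the data rows below are invented and only mimic the direction of the findings (deeper reviews more helpful, higher-rated reviews less helpful):

```python
# Pearson correlation between review attributes and helpful votes,
# on an invented five-review sample.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (review depth in words, star rating, helpful votes)
rows = [(500, 2, 40), (350, 3, 28), (200, 4, 15), (120, 5, 6), (80, 5, 3)]
depth, rating, helpful = (list(c) for c in zip(*rows))

print(round(pearson(depth, helpful), 2))   # positive: deeper -> more helpful
print(round(pearson(rating, helpful), 2))  # negative: higher rating -> less helpful
```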
TL;DR: Existing helpfulness ratings for the most helpful reviews are found to be inflated and significantly higher than ratings collected from a random population, due to online shopper self-selection behavior.
Abstract: Many online reviews have a helpfulness rating, and such ratings are widely used by online shoppers for shopping research. Researchers also use them as a review quality benchmark. However, there is scant research about the reliability of such ratings. This paper explores the reliability of helpfulness ratings and their resistance to manipulation. We found that the existing helpfulness ratings for the most helpful reviews are inflated and significantly higher than ratings we collected from a random population, due to online shopper self-selection behavior. We also found that existing helpfulness ratings for the most helpful favorable reviews have an anchoring effect on subsequent votes and thus could potentially be manipulated to boost sales. In contrast, ratings for the most helpful critical reviews have a counter-anchoring effect due to risk aversion and thus could backfire if manipulated. Implications and future research are discussed.
Keywords: Online review; Helpfulness; Amazon.com; User generated content; B2C ecommerce