Thursday, November 1, 2018

Recommender systems, lists, top lists, databases







A recommender system or recommendation system (sometimes replacing "system" with a synonym such as platform or engine) is a subclass of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item.[1][2]
Recommender systems are utilized in a variety of areas including movies, music, news, books, research articles, search queries, social tags, and products in general. There are also recommender systems for experts,[3] collaborators,[4] jokes, restaurants, garments, financial services,[5] life insurance, romantic partners (online dating), and Twitter pages.

The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, and do not take into account the risk of disturbing the user in specific situations. However, in many applications, such as recommending personalized content, it is also important to consider the risk of upsetting the user, so as not to push recommendations in certain circumstances, for instance during a professional meeting, early in the morning, or late at night. Therefore, the performance of a recommender system depends in part on the degree to which it has incorporated this risk into the recommendation process.

Risk definition

"The risk in recommender systems is the possibility to disturb or to upset the user which leads to a bad answer of the user".[70]
In response to these challenges, the authors of DRARS, A Dynamic Risk-Aware Recommender System[70] developed a dynamic risk-sensitive recommendation system called DRARS (Dynamic Risk-Aware Recommender System), which models context-aware recommendation as a bandit problem. The system combines a content-based technique with a contextual bandit algorithm. They showed that DRARS improves on the Upper Confidence Bound (UCB) policy, the best currently available algorithm, by computing an optimal exploration value that maintains the trade-off between exploration and exploitation according to the risk level of the user's current situation. The authors conducted experiments in an industrial context with real data and real users, and showed that taking the risk level of users' situations into account significantly increased the performance of the recommender system.
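The exploration/exploitation trade-off described above can be sketched as a UCB policy whose exploration bonus is scaled down as the situation's risk level rises. This is a minimal illustrative sketch, not the authors' published DRARS algorithm; the names `risk_aware_ucb`, `reward_fn`, and `risk_level`, and the simple linear scaling of the bonus, are assumptions made for the example.

```python
import math
import random

def risk_aware_ucb(reward_fn, n_arms, n_rounds, risk_level, seed=0):
    """UCB arm selection with a risk-scaled exploration bonus.

    Illustrative sketch only: risk_level in [0, 1] shrinks the
    exploration term, so in risky situations (risk_level near 1)
    the policy leans on exploiting the best-known arm.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms        # pulls per arm
    means = [0.0] * n_arms       # running mean reward per arm
    total = 0.0
    for t in range(1, n_rounds + 1):
        untried = [a for a in range(n_arms) if counts[a] == 0]
        if untried:              # play every arm once before trusting stats
            arm = untried[0]
        else:
            c = 1.0 - risk_level  # exploration weight shrinks with risk
            arm = max(
                range(n_arms),
                key=lambda a: means[a] + c * math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = reward_fn(arm, rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
        total += r
    return total, counts
```

With a higher risk level the bonus term contributes less, so a known-good recommendation (e.g. one safe to show during a meeting) is chosen more often than an untested one.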

Multi-criteria recommender systems

Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information on multiple criteria. Instead of developing recommendation techniques based on a single criterion value (the overall preference of user u for item i), these systems try to predict ratings for u's unexplored items by exploiting preference information on the multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems.[77] See this chapter[78] for an extended introduction.
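One common way to connect criterion ratings to the overall preference is an aggregation-function approach: learn how the overall rating depends on the individual criterion ratings, then use that function to predict overall preferences. The sketch below is illustrative rather than a specific published MCRS method; it fits linear weights and a bias by plain gradient descent on squared error.

```python
def fit_criteria_weights(criteria, overall, lr=0.01, epochs=4000):
    """Fit overall ≈ b + sum_k w[k] * criteria_k by gradient descent.

    `criteria` is a list of per-item criterion rating vectors (e.g.
    [story, visuals] scores), `overall` the matching overall ratings.
    Illustrative aggregation-function sketch for MCRS.
    """
    n_crit, n = len(criteria[0]), len(overall)
    w, b = [0.0] * n_crit, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_crit, 0.0
        for x, y in zip(criteria, overall):
            err = b + sum(wk * xk for wk, xk in zip(w, x)) - y
            for k in range(n_crit):
                gw[k] += 2.0 * err * x[k] / n   # d(err^2)/dw[k], averaged
            gb += 2.0 * err / n                 # d(err^2)/db, averaged
        w = [wk - lr * gk for wk, gk in zip(w, gw)]
        b -= lr * gb
    return w, b
```

The learned weights indicate how strongly each criterion (say, "story" versus "visuals" for a movie) drives a user's overall rating, which is exactly the extra signal a single-criterion system discards.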

Opinion-based recommender systems

In some cases, users are allowed to leave text reviews or feedback on items. Such user-generated text is implicit data for the recommender system because it is a potentially rich source of both the features/aspects of an item and the user's evaluation of, or sentiment toward, the item. Features extracted from user-generated reviews improve on an item's metadata: like metadata, they reflect aspects of the item, but extracted features tend to be the aspects users actually care about and comment on. Sentiments extracted from the reviews can be seen as the user's rating scores on the corresponding features. Popular approaches to opinion-based recommender systems utilize techniques including text mining, information retrieval, and sentiment analysis (see also multimodal sentiment analysis).
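The idea of turning review text into per-aspect rating scores can be illustrated with a toy extractor: scan each sentence for a known aspect term, score the sentence with a sentiment lexicon, and average scores per aspect. The aspect set and lexicon below are invented for the example; real systems use trained text-mining and sentiment-analysis models rather than a hand-written word list.

```python
# Toy aspect lexicons (illustrative only, not from any real system).
ASPECTS = {"battery", "screen", "camera"}
SENTIMENT = {"great": 1, "good": 1, "poor": -1, "terrible": -1}

def aspect_scores(review):
    """Average sentence-level sentiment for each aspect mentioned."""
    scores = {}
    for sentence in review.lower().replace("!", ".").split("."):
        words = sentence.split()
        polarity = sum(SENTIMENT.get(w, 0) for w in words)
        for aspect in (w for w in words if w in ASPECTS):
            scores.setdefault(aspect, []).append(polarity)
    return {a: sum(v) / len(v) for a, v in scores.items()}
```

For a review like "The battery is great. The screen is terrible." this yields a positive score for "battery" and a negative score for "screen", which a recommender can then treat as implicit per-feature ratings.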

Recommender system evaluation

To measure the effectiveness of recommender systems and compare different approaches, three types of evaluation are available: user studies, online evaluations (A/B tests), and offline evaluations.[23]

User studies are rather small in scale: a few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and the users then judge which recommendations are best. In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly assigns at least two different recommendation approaches to generate the recommendations. Effectiveness is measured with implicit measures such as conversion rate or click-through rate.

Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.[79] The effectiveness of a recommendation approach is then measured by how well it can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness; for instance, a recommender system may be assumed effective if it recommends as many as possible of the articles contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers.[80][81][23] For instance, results of offline evaluations have been shown to correlate poorly with results from user studies and A/B tests,[81][82] and a dataset popular for offline evaluation has been shown to contain duplicate data, leading to wrong conclusions in the evaluation of algorithms.[83]
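An offline evaluation along these lines can be sketched in a few lines: fit a predictor on a train split of historic ratings, then report root-mean-square error (RMSE) on held-out ratings. The item-mean baseline below is an illustrative stand-in, not a recommendation algorithm from the text, and the fallback value of 3.0 is an assumed mid-scale default.

```python
import math

def item_mean_predictor(train, fallback=3.0):
    """Baseline: predict an item's mean training rating (global fallback
    for items unseen in training). `train` holds (user, item, rating)."""
    sums, counts = {}, {}
    for _user, item, r in train:
        sums[item] = sums.get(item, 0.0) + r
        counts[item] = counts.get(item, 0) + 1
    return lambda user, item: (sums[item] / counts[item]
                               if item in counts else fallback)

def rmse(predict, test):
    """RMSE of predictions over held-out (user, item, rating) triples."""
    return math.sqrt(sum((predict(u, i) - r) ** 2 for u, i, r in test)
                     / len(test))
```

Competing approaches are then compared by the same held-out RMSE, which is precisely the setup that the cited work warns can diverge from user-study and A/B-test results.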












