Recommendation system for travellers based on tripadvisor.com data

Exploring the unique benefits of using machine learning models based on latent (hidden) factors. Recommender system approaches. An overview of popular travel advisory approaches. Performance metrics. Matrix factorisation models. k-nearest-neighbour models.

Section: Management and labour relations
Type: degree thesis
Language: English
Date added: 14.07.2020
File size: 1.2 M


Posted at http://www.allbest.ru/


Recommendation System for Travellers Based on TripAdvisor.com Data

Denisova Nadezhda

Introduction

Historical background. Ever since the mid-1970s, when the concept of a recommender (or recommendation) system was first defined in the scientific literature, the respective research field has been quickly taking shape as a combination of several closely related academic disciplines: applied mathematics, computer science, behavioural economics and, not least important, business studies, which have been key in developing practical solutions and applying the technical research findings to eventually yield commercial value (Sharma & Singh, 2016). The first actual implementation of a recommender system in a business environment was achieved at the Xerox Corporation's Palo Alto Research Centre in 1992. According to the reports, this novel recommender system, called Tapestry, was able to manage a user's stream of incoming electronic mail documents, news wire stories and articles much more efficiently, based on which documents other users had already marked as interesting (Goldberg, Nichols, Oki, & Terry, 1992). As a result, over the next few years the realisation of the enormous commercial potential of recommender system research quickly attracted investments from large business corporations looking to raise the efficiency of internal operations and communication, which in turn enabled research to accelerate further. Thus, for the widely recognised reason of being able to massively enhance the online experience of consumers, recommender systems nowadays play a crucial part in the daily functioning of the biggest internet platforms, such as Netflix, Amazon or TripAdvisor, by presenting users with personalised suggestions of movies for the evening, relevant shopping items or much sought-after travel destinations (Sharma & Singh, 2016). 
Along with the continuous increase in computational power and the developments of new machine learning algorithms as well as the digitalisation of many traditional business models, novel ways of improving recommendations are still being actively researched.

Introduction of key terms. The notion of a recommender (or recommendation) system in its general meaning can be defined as a cutting-edge software tool that takes user data as input and produces suggestions of items which are most likely to be of interest to that particular user (Adomavicius & Tuzhilin, 2005). More specifically, a recommender system is, in essence, an information filtering algorithm that often functions on the basis of several either statistical or machine learning predictive models and produces personalised item recommendations for users, by predicting how likely a given item will be preferred by a given user based on the various kinds of available data about the online behaviour of users and the descriptions of items. Moreover, the ultimate products of any recommender system are the personalised item recommendations that are commonly arranged in the form of a limited list of items, which are ranked according to their relevance to a given user and which are very often provided as an output in an online service application that the user interacts with through his or her web browser (Adomavicius & Tuzhilin, 2005).

Research gap and questions. Firstly, the most potent and robust recommender systems, which are the ones exploiting hybrid filtering, have frequently been developed on the basis of the most accessible and abundant online data of film ratings, product reviews and e-document texts. Thus, the current study attempts to exploit the less convenient data of user ratings and reviews from TripAdvisor, thereby contributing to a much less popular application domain of hybrid recommender system research, namely, the tourism domain. Secondly, as the user-item matrix of travel reviews is expected to be highly sparse, the present research chooses to adapt the specific algorithms of the latent factor models, which have been widely celebrated for being able to efficiently reduce the dimensionality of sparse data and for producing the most accurate results among computationally comparable machine learning algorithms. Thirdly, the many variations of matrix factorisation algorithms, although proven highly efficient on extremely sparse data from other application domains of item recommendation, such as film recommendation, have not yet been tested on the similarly sparse TripAdvisor data of user ratings for tourist landmarks and experiences. Fourthly, since most studies of recommender systems based on TripAdvisor data have relied heavily on the multi-criteria ratings of hotels and restaurants, significantly fewer research papers have explored how the semantic analysis of textual reviews can be adapted to improve the robustness of a hybrid recommendation system in the tourism domain, where such user reviews are copious. 
Thus, the central aim of this research is to construct a hybrid filtering recommender system for travellers, based on the latent factor models and exploiting the TripAdvisor data for tourist attractions from the single most popular tourist destination in Europe - the city of London, UK, by means of answering the following main and supplementary research questions:

- What is the highest-performing collaborative filtering algorithm among the matrix factorisation models, i.e. the one able to produce the most accurate recommendations based on the TripAdvisor data of user ratings for London's tourist attractions?

- Does the text recognition technique of latent Dirichlet allocation reveal coherent categories of user preferences that can be employed for the content-based filtering of new users, based on the TripAdvisor data of user reviews for London's tourist attractions?

Central goal of the research. The present research thesis is directed towards adopting the highly efficient and widespread latent factor models from prior recommender system studies in the film and e-document domains and applying them to the previously untested TripAdvisor data of tourist attractions, with the ultimate goal of combining the results of the latent factor models into a hybrid filtering recommender system for personalised recommendations of tourist attractions.

Purpose of the research. This research aims to incorporate aspects of descriptive and exploratory research purpose types. The proposed research sees it necessary to implement the descriptive purpose aspect at the preliminary stage, in relation to the collection and descriptive analysis of the sampled user data, as well as the exploratory purpose aspect as the core focus of the research, in terms of identifying the novel configuration of a hybrid recommendation system for travellers as well as of training the machine learning models in the subsequent trial-and-error process accompanied by a series of adjustments toward the most accurate versions of the chosen prediction algorithms.

Objectives of the research thesis are identified as the following:

a) First and foremost, the scientific contribution to the extensive research area of hybrid recommender systems for travellers, which consists in demonstrating the unique application benefits of latent factor models on the previously untested tourist attractions data from the TripAdvisor web platform.

b) And secondly, the development and eventual launch of a fully functional consumer prototype of a hybrid recommender system that can be tested by real online users, which may later be further developed into a standalone online service application providing personalised recommendations of tourist attractions, and which can potentially be adapted for implementation into the larger recommendation system architecture of online platforms that provide travel-related services, such as the TripAdvisor platform.

Tasks that are set for the completion of the present research objectives are as follows:

a) Firstly, to identify the most commonly used types of TripAdvisor data across the range of research publications on travel recommender systems, assembled from trustworthy research papers indexed in the scientific citation databases of the Web of Science and Scopus platforms, in order to justify the choice of adapting the previously untested TripAdvisor data of tourist attractions for the goals of the present research paper;

b) Secondly, to web-scrape the user ratings and reviews data from the TripAdvisor platform specifically on the landmarks and experiences of London, UK, with the primary help of the Python libraries Selenium (browser automation) and BeautifulSoup (HTML parsing), as well as to describe and visualise the results of the respective data analysis, mainly in order to assess the degree of the data sparsity problem and make appropriate adjustments;

c) Thirdly, to train the previously reviewed range of the most popular and computationally accessible matrix factorisation algorithms (FunkSVD, SVD++ and NMF), as well as the common benchmark methods of user- and item-based k-nearest neighbours, on the user-item interaction matrix that contains overall ratings for tourist attractions, in order to evaluate and compare the quality of their rating predictions on unknown test users according to the commonly used accuracy metric of MAE, thereby justifying the ultimate choice of a single collaborative filtering algorithm that is able to produce the most accurate recommendations for the system's new users;

d) Fourthly, to test whether applying the topic recognition algorithm of Latent Dirichlet Allocation to the array of users' textual reviews for the collected sample of London's tourist attractions reveals a more general but more relevant list of attraction categories, consistent with the interests of London travellers, as compared with the default list of 15 attraction types provided on the TripAdvisor platform;

e) Fifthly, to enhance the travel recommender system's performance by executing the following tasks: solving the cold start problem for the actual new users by pre-filtering them according to the newly discovered list of attraction types; as well as ensuring the diversity of user recommendations by penalising the recommendation algorithm for suggesting the most popular attractions;

f) And finally, to develop a fully-fledged practical solution for recommending London's tourist attractions in the form of an interactive web-based service application, built primarily on the single top-performing collaborative algorithm, which showed the highest accuracy of rating predictions according to the MAE metric, and supplemented by the content-based filtering of attractions according to the user's preference for one of the attraction categories revealed by the LDA algorithm.
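As a minimal sketch of the parsing step in task (b), the snippet below extracts attraction names, ratings and review texts with BeautifulSoup from a static, invented piece of HTML. The class names and field layout here are assumptions for illustration only; the real TripAdvisor markup differs and changes over time, and in practice the pages are first rendered and paginated with Selenium before being handed to the parser:

```python
from bs4 import BeautifulSoup

# A static, invented HTML snippet standing in for a scraped review card;
# the real TripAdvisor class names and page structure are different.
html = """
<div class="review">
  <span class="rating" data-value="5"></span>
  <a class="attraction">Tower of London</a>
  <p class="text">A fascinating journey through royal history.</p>
</div>
<div class="review">
  <span class="rating" data-value="3"></span>
  <a class="attraction">London Eye</a>
  <p class="text">Great views but very long queues.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
records = []
for card in soup.select("div.review"):
    records.append({
        "attraction": card.select_one("a.attraction").get_text(),
        "rating": int(card.select_one("span.rating")["data-value"]),
        "review": card.select_one("p.text").get_text(),
    })
print(records)
```

Collecting such records over all attraction pages yields the (user, attraction, rating, review) tuples from which the user-item matrix is later assembled.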

Structure of the thesis. The present research paper consists of 57 pages in total (without appendices) and comprises a reference list of 74 citations of highly referenced foreign research articles and conference reports, the majority of which have been taken from the trustworthy scientific databases indexed on the Web of Science and Scopus platforms. The brief structural outline of the present research paper is as follows:

a) First and foremost, the academic literature review for the purposes of identifying the theoretical foundations and the scientific field contribution of the prior research as well as revealing the specific gap that the present research intends to contribute in filling;

b) Secondly, the statement of research questions and respective hypotheses and the justification of the choice of approaches by which the present thesis aims to arrive at the satisfying answers;

c) Thirdly, the methodology of the research, which specifies and argues for the appropriateness of the exact methods used at the data mining, data analysis and machine learning stages;

d) Fourthly, the chronological record of the intermediate machine learning model outcomes and the respective justified adjustments as well as the description of the ultimate research results;

e) And finally, the concluding part of the paper that focuses on critically discussing the applicability and quality of the achieved results as well as the potential research possibilities of exploring further on in the set direction.

Professional relevance and significance of the research object. Since recommender systems are currently embedded into the majority of travel-related online platforms, improvements in the performance of their algorithms can provide more accurate and diverse user recommendations and, thus, greatly benefit every party involved, from consumers and the companies that host these platforms to the travel businesses that advertise and try to improve their product and service offers on them. Moreover, since tourists can often feel incompetent to exploit advanced search options, or even completely lost when inundated with the abundance of information present online, an online service provider that solves the problem of information overload and recommends only the most relevant items is bound to drastically improve the engagement of its users, thereby capturing and retaining far bigger numbers of active users, which ultimately translates into higher profits, no matter whether the platform earns from direct item sales (like Amazon), from user subscriptions (like Netflix) or from advertising third-party products (like YouTube). However, even though this research is focused on data in the travel domain, the recommender algorithm it aims to arrive at can potentially be used in other areas where user feedback is provided in the form of textual reviews and ratings.

Scope of the thesis. First and foremost, in order to make the collaborative recommender algorithm more robust and avoid extreme data sparsity, the present research thesis employs a purposeful method of data sampling and ensures the collection of a representative data sample, by scraping all of the user ratings and reviews posted in the course of the past year, from February 2019 to January 2020, under every single one of approximately one thousand tourist attractions. Such a sampling approach is the most convenient, as the default sorting of TripAdvisor reviews is by recency (not popularity or rating level); it is also completely sufficient, as it provides a representative sample of the most recent reviews by the platform's active users, devoid of any season- or demographic-specific bias. Furthermore, in light of the extremely demanding computational conditions of processing millions of reviews, it is again reasonable to narrow the focus down to the single most popular city destination in Europe, London, UK, and scrape the data on its tourist attractions to create a dataset of the total magnitude of tens of thousands of user ratings and reviews.

Limitations of the research. The core limitation of any scientific research that employs machine learning methods is the computational complexity of the algorithms. Even though prediction algorithms that would show more accurate results might exist in theory, their practical implementation is infeasible due to the scarcity of documented application cases in computer science research and, much more importantly, due to the temporal and computational constraints of the present bachelor's thesis. Secondly, since the data on user ratings will be narrowed down to the travel destinations of only a single city, the findings uncovered by applying the text recognition algorithm of the LDA to the user reviews for London's attractions will most probably turn out to be location-specific and, hence, not applicable to other destinations. Finally, the limited scale and research framework of this bachelor's thesis recognisably limits the practical applicability of the results, by only allowing models to be run and tested on the collected data sample, for which reason launching the recommender system online for actual users to try out might yield different accuracy results.

1. Theoretical foundation

1.1 Definition of main terms and concepts

The general meaning of the term «recommendation» in the present context of recommender system research can be defined as a prediction of a given person's preferences for a specific range of items, based on the preference history of that same user or of other users (Adomavicius & Tuzhilin, 2005). Furthermore, the most basic definition of the term «algorithm» can be formulated as a specific set of instructions, often expressed in a computer (or programming) language, to be executed by a certain computer programme. Thus, the concept of a recommender (or recommendation) system, which is central to this research thesis, can, in essence, be defined as an algorithm that often incorporates statistical as well as machine learning models in order to provide users with a recommendation list of items, which have been predicted to be the most likely to match the interests and other characteristics of these users (Lu et al., 2012).

One of the first ever descriptions of the mechanism that underlies modern recommender systems in academic literature dates back to the year 1979. Rich (1979), in her paper on the modelling of «user stereotypes», proposed her own version of a recommender system called Grundy, a virtual librarian that probed users with questions about their personalities and interests, thereby associating users with various «stereotypes», or collections of frequently encountered characteristics of particular types of users, and, thus, being able to recommend new books to users. In the case when the user rejected the recommendation, the system would start clarifying which of the stereotypes it based its recommendation on were imprecise about the user and, after the appropriate corrections were applied, would once again attempt to make further recommendations. This cycle continued to repeat for as long as the user requested the system to further adjust the evaluations of their interests and to try recommending other books that they would be interested in reading (Rich, 1979).

Similarly, the key concept at the root of many modern recommendation systems is identified as: firstly, predicting the item ratings that users would very likely give to the yet unrated items; and then, presenting those users with tailored recommendation sets of the top relevant items, which they would be very likely to appreciate the most. Hence, the general task of any recommender system can be described as the creation of a predictive algorithm that considers, separately or both at once, the attributes of an item space as well as the attributes of a user space (those spaces being also known as profiles) in order to evaluate how well a specific item matches a particular user. User profiles are usually comprised of user IDs, users' demographic information and a variety of ratings the user has assigned to different items, whereas the profile characteristics of items vary drastically, depending on the domain in which the particular recommender system is going to be implemented; for example, restaurants can be characterised by their location, average bill, overall rating, dominant type of cuisine, availability of al fresco dining and so on (Adomavicius & Tuzhilin, 2005).

Furthermore, the way the most primitive recommender algorithm determines that certain items are worth being recommended to a given user is directly connected to the concept of similarity. However, the core problem of the similarity-based recommendation systems is the choice of how exactly to define and to measure the similarity between the profiles of items or between those of users. In the instance when item ratings of users are explicitly available, a given pair of users can be identified as being similar if they tend to assign similar ratings to the items that both of them have rated. In this case user similarity can be measured using a correlation metric, such as the Pearson correlation coefficient. On the other hand, in the situation when there is no rating information available, a pair of users can be considered as being similar when they both viewed, liked or bought many of the same items. Such information can be either inferred from the structural properties of the users' input data or gathered implicitly by tracking the online activity of those users as they visit the observed web sites. In addition, other external information such as the meta data of users' attributes, social network tags and items' content can be utilised to enhance the quality of a similarity estimate between given pairs of users or pairs of items (Lu et al., 2012).
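As a toy illustration of the rating-based user similarity described above, the Pearson correlation coefficient can be computed over the items two users have co-rated. The users, attractions and ratings below are invented for the sketch:

```python
from math import sqrt

def pearson_sim(ratings_u, ratings_v):
    """Pearson correlation between two users over their co-rated items."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0                        # not enough overlap to judge
    mean_u = sum(ratings_u[i] for i in common) / len(common)
    mean_v = sum(ratings_v[i] for i in common) / len(common)
    num = sum((ratings_u[i] - mean_u) * (ratings_v[i] - mean_v) for i in common)
    den = sqrt(sum((ratings_u[i] - mean_u) ** 2 for i in common)) \
        * sqrt(sum((ratings_v[i] - mean_v) ** 2 for i in common))
    return num / den if den else 0.0

alice = {"Tower of London": 5, "London Eye": 4, "Tate Modern": 2}
bob = {"Tower of London": 4, "London Eye": 5, "Tate Modern": 1}
sim = pearson_sim(alice, bob)
print(f"similarity(alice, bob) = {sim:.3f}")
```

The two invented users rank the attractions almost identically, so the coefficient comes out strongly positive; users with opposite tastes would score close to -1.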

Lastly, another basic concept that is crucial to understanding the general principles by which any recommender system operates is user interface, or UI for short. The widespread computer science term of user interface denotes the way in which a user is able to interact with any given computer system with that system also being able to communicate back, in particular, by means of input devices and software applications (Lu et al., 2012). Although one of the very first practical implementations of a recommender system - a virtual librarian named Grundy, which was described previously - had an almost completely transparent user interface, revealing to a user all of its assumptions about them as well as the whole of its further decision process of choosing the right book to recommend, nowadays hardly any recommender system is made to be transparent (Rich, 1979). Such a black box approach in regard to user interface, which virtually all modern recommendation systems have evolved to adopt, is mainly dictated by the sheer complexity of the decision-making process of today's systems, which very rarely lends itself to any kind of user interpretation. However, the transparency of user interface was also reduced in order to conceal any possible traces of illegal user data manipulations as well as any unethical assumptions made about a given user, due to which facts users have actually been enjoying the recommendations slightly less ever since (Sinha & Swearingen, 2002).

1.2 Recommender system approaches: core algorithms and respective applications

In a classical fashion, the majority of recommender systems can be separated into three categories based on their approach to matching a new user's input data against an existing database in order to produce relevant recommendations:

a) Content-based filtering approach that is based on the varying degrees of similarity between different items;

b) Collaborative filtering approach that is frequently based on the so-called «neighbourhoods» of similar users;

c) Hybrid filtering approach, which aspires to combine both of the above approaches in the most balanced and efficient way (Lu et al., 2015).

To start with, a content-based recommender system functions by, firstly, calculating the similarity between all of the items in a database, for instance, movies, news articles or travel destinations; and then, as a given user updates the system with their item ratings, presenting that user with a recommendation of items which were previously measured to be similar to the ones that user has rated the highest. Thus, with content-based filtering the ultimate user recommendation depends exclusively on that single user's profile as well as, obviously, on the database of item profiles. The origins of content-based filtering recommenders can be traced back to the first inklings of research in the areas of information retrieval and information filtering, techniques which came to be most extensively applied in the field of Natural Language Processing. For this reason the content-based filtering approach is commonly employed by recommendation algorithms that are focused on measuring the similarity between chunks of textual data from item contents. Thus, item profiles are often described in terms of keywords, and the similarity between them is calculated via the term frequency-inverse document frequency metric, commonly abbreviated as TF-IDF, which can be computed in the following way:

$$w_{i,j} = TF_{i,j} \times IDF_i = \frac{f_{i,j}}{\max_z f_{z,j}} \times \log \frac{N}{n_i}$$

where:

- $f_{i,j}$ - the total number of appearances of the keyword $k_i$ in the document $d_j$;

- $\max_z f_{z,j}$ - the number of appearances of the most frequently encountered keyword in the document $d_j$;

- $N$ - the total number of documents;

- $n_i$ - the total number of documents within which the keyword $k_i$ is present (Lops et al., 2011).

In simple terms, the goal behind this basic metric is to identify the keywords in a document that are most relevant to the document's topic, by recording the keywords that occur most frequently within that specific document (term frequency) and that are, at the same time, among the least frequently occurring in the general corpus of all other documents (inverse document frequency). This way, the weight of each keyword in a document is computed as the product of the TF and IDF values (Adomavicius & Tuzhilin, 2005). As a result, based on these keyword weights, the degree of closeness between any two documents in the overall corpus can be easily identified either with the help of various distance metrics, for instance the Euclidean distance or the Manhattan distance, or with similarity measures, by far the most common of which is the cosine vector similarity (Lops et al., 2011).
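The weighting scheme just described, together with cosine similarity, can be sketched in plain Python. The three mini-documents are invented for illustration; real item profiles would be built from the scraped attraction descriptions or reviews:

```python
import math
from collections import Counter

# Three invented mini-documents standing in for item (attraction) profiles.
docs = {
    "d1": "tower castle history royal tower",
    "d2": "modern art gallery exhibition",
    "d3": "royal palace history garden",
}

def tfidf(corpus):
    """TF-IDF weights: TF normalised by the most frequent term in the
    document, IDF as log(N / number of documents containing the term)."""
    n_docs = len(corpus)
    tokenised = {d: text.split() for d, text in corpus.items()}
    doc_freq = Counter()
    for terms in tokenised.values():
        doc_freq.update(set(terms))
    weights = {}
    for d, terms in tokenised.items():
        tf = Counter(terms)
        max_tf = max(tf.values())
        weights[d] = {t: (c / max_tf) * math.log(n_docs / doc_freq[t])
                      for t, c in tf.items()}
    return weights

def cosine(w1, w2):
    """Cosine similarity between two sparse keyword-weight vectors."""
    num = sum(w1[t] * w2[t] for t in set(w1) & set(w2))
    den = math.sqrt(sum(v * v for v in w1.values())) \
        * math.sqrt(sum(v * v for v in w2.values()))
    return num / den if den else 0.0

w = tfidf(docs)
print(cosine(w["d1"], w["d3"]), cosine(w["d1"], w["d2"]))
```

Documents d1 and d3 share the discriminative keywords «royal» and «history», so their similarity is positive, while d1 and d2 share no keywords and score zero.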

However, the overwhelming majority of modern recommender systems do not rely exclusively on such obsolete heuristic methods as basic similarity measures; instead they employ machine learning algorithms that create predictive models by training them on the input data of users. Thus, content-based recommenders are generally divided into the two major subgroups of classifier-based and nearest neighbour-based models (Portugal et al., 2018). The former probabilistically assess all the items that have not been rated by a user according to the binary categories of either «recommend» or «do not recommend», based on the user profiles of item ratings; examples include a wide variety of decision tree models and Bayesian network methods. The latter models separate all of the items into clusters based on how similar they are, in order to recommend those items that share a cluster with the ones a given user has rated the highest, for instance, by applying the k-nearest-neighbour classifier technique to the cluster centres produced as a result of the k-means clustering method (Lops et al., 2011).

One of the main problems inherent in content-based recommendation systems is the extreme homogeneity of recommended items as well as the narrowness of the overall recommendation range. Users are continuously presented with items too similar to those they liked, while being completely unable to discover brand new items which have slightly different profiles of characteristics but may still be of much interest to them. Moreover, making highly accurate recommendations of items that are too much alike appears to be a much worse strategy than trying to diversify the range of recommended items: for instance, a system might suggest a myriad of similar news articles that, in fact, cover exactly the same event (Adomavicius & Tuzhilin, 2005).

Speaking of the collaborative filtering recommender systems, the key task of such an approach is to predict the relevance (or the rating scores) of items for a particular user based on the similar profiles of other users present in the database, thereby quite literally making all the users collaborate in producing a good recommendation for any single user. The respective family of recommendation methods is commonly further separated into two major classes: the memory-based and the model-based ones. The simple core difference between these two types of collaborative algorithms is that the memory-based ones make calculations over, and store in computer memory, the entire user database in order to produce statistical predictions of item ratings for new users, while the model-based ones employ various machine learning models in order to, firstly, train them on an available compiled dataset and, then, use them to make predictions for new users (Breese et al., 1998). Each of the two types of collaborative filtering algorithms is associated with its major respective group of specific methods, namely, the user neighbourhood methods and the latent factor models respectively. Firstly, the former are focused on measuring similarity between either users or items. However, even though the neighbourhood methods might exploit the attributes of items, unlike the content-based algorithms they make a recommendation based not solely on item similarity, but rather on the similarity between users, thereby being able to identify like-minded individuals and recommend to each of them those items which only one of them has rated so far. Latent factor models, in their own turn, infer the presence of implicit descriptive factors that can be revealed from certain arbitrary, yet computer-discernible patterns in the user-item data (Koren, Bell & Volinsky, 2009).

Although the collaborative filtering approach can be implemented by means of many of the same basic classification and regression methods that were mentioned previously for the content-based approach (the various distance, correlation and similarity measures; probabilistic classifiers such as decision tree, random forest and Bayesian models; linear, polynomial or logistic regression models; and clustering methods), the present paper is going to focus on one of the most promising types of machine learning models for collaborative filtering, the latent factor models, for the main reason that these algorithms have proven to deal with huge and sparse datasets more efficiently than the majority of other standalone techniques (Koren et al., 2009).

Latent factor models have been most extensively studied in the research domains of: the Natural Language Processing techniques, with the specific examples including, but not limited to the probabilistic Latent Semantic Analysis (pLSA) being used for information retrieval (Hofmann, 2004) and the Latent Dirichlet Allocation (LDA) employed in text classification (Blei et al., 2003); as well as in the context of the major recommender system algorithms and dimensionality reduction techniques, some examples of which are the matrix factorisation-like methods of singular value decomposition (SVD) being applied for the dimensionality reduction of a user-item ratings matrix (Billsus & Pazzani, 1998).

However, it can easily be argued that the most widely known latent factor models for collaborative filtering are the matrix factorisation models. These methods dramatically gained in popularity as a result of the Netflix Prize open competition (2006-2009), when an SVD algorithm, which later became widely known after its creator as the FunkSVD, was specifically tuned and applied by Simon Funk to produce surprising improvements in the accuracy of movie recommendations (Piatetsky-Shapiro, 2007). The basic methodology behind matrix factorisation techniques consists in, firstly, representing both items and users as vectors of latent factors and, then, recommending to users only those items that produce the highest dot product between those factor vectors. Due to the fact that matrices of user-item interactions often suffer from the data sparsity problem and, as a result, most machine learning models tend to drastically overfit, a regularisation parameter can be naturally incorporated into the process of minimising the following squared prediction error, in order to penalise extremely high factor vector norms for items and users:

min_{q*, p*} Σ_{(u,i)∈K} (r_ui − q_i^T p_u)² + λ(‖q_i‖² + ‖p_u‖²);

where: r_ui - the actual rating of the user u for the item i;

q_i - the unknown factor vector for each item i that is being predicted;

p_u - the unknown factor vector for each user u that is being predicted;

q_i^T p_u - the dot product, i.e. the predicted rating of the user u for the item i;

‖·‖ - denotes the norm;

K - represents the set of user-item pairs where ratings are known;

λ - the regularisation term (Koren et al., 2009).
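As a minimal illustration of this objective, the following Python sketch, with hypothetical toy factors and ratings, evaluates the regularised squared error for a given factorisation:

```python
import numpy as np

def regularised_squared_error(ratings, P, Q, lam):
    """The objective minimised by regularised matrix factorisation: squared
    prediction error over the known (user, item, rating) triples in K plus
    an L2 penalty on the corresponding factor vectors."""
    total = 0.0
    for u, i, r_ui in ratings:
        pred = Q[i] @ P[u]                            # dot product q_i^T p_u
        total += (r_ui - pred) ** 2                   # squared error
        total += lam * (Q[i] @ Q[i] + P[u] @ P[u])    # norm penalty
    return total

# toy factors that reconstruct the two known ratings exactly
P = np.array([[1.0, 0.0], [0.0, 1.0]])   # user factor vectors p_u
Q = np.array([[1.0, 0.0], [0.0, 1.0]])   # item factor vectors q_i
ratings = [(0, 0, 1.0), (1, 1, 1.0)]     # known (u, i, r_ui) triples
print(regularised_squared_error(ratings, P, Q, lam=0.1))  # penalty term only
```

With a perfect reconstruction the error term vanishes, so any remaining value comes entirely from the regularisation penalty.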

A wide range of variations of the matrix factorisation methods continues to be experimented with in recommender system research. Three of the most accurate and frequently cited matrix factorisation methods, which operate on non-categorical variables and which have almost exclusively been tested on the most widespread and abundant content of film databases, were selected for the present study: the SVD, the SVD++ and the NMF algorithms, each of them optimised by means of the regularised SGD learning method (Cacheda et al., 2011; Koren et al., 2009).

First and foremost comes the well-established matrix factorisation-like model called Singular Value Decomposition (SVD), which has proved to be the most accessible of the factorisation methods while remaining highly efficient with sparse data compared to other collaborative filtering methods (Cacheda et al., 2011; Gorell, 2016). The basic version of SVD does not handle the data sparsity of a user-item matrix well: applied to a very meagre amount of known values, it understandably tends to overfit and fails to make sound predictions for new users. Initially, most research efforts were directed at adapting various data imputation techniques to increase the density of interaction matrices; however, this approach usually proves extremely costly as well as too complex and case-specific (Ranjbar et al., 2015). For this reason a more general and straightforward technique was proposed that yields a double benefit: first, the implementation of the abovementioned regularisation term and, second, the inclusion of additional bias parameters for each user and each item, which account for the larger part of rating variation caused by the independent effects associated with particular users and items (Koren et al., 2009; Paterek, 2007). Thus, the rating prediction formula is modified to include the biases and is calculated from the following four components:

r̂_ui = μ + b_u + b_i + q_i^T p_u;

where: μ - the global average rating across all known data points;

b_u - the user bias reflecting the general tendencies of how the user u rates items;

b_i - the item bias reflecting the general tendencies of how the item i is rated by users;

q_i^T p_u - the dot product of factor vectors accounting for the user-item interaction (Koren et al., 2009).

As a side note, when a new user or a new item is added, all of the associated biases and factors are initially assumed to be zero. As a result, the squared error minimisation problem becomes the following:

min_{b*, q*, p*} Σ_{(u,i)∈K} (r_ui − μ − b_u − b_i − q_i^T p_u)² + λ(b_u² + b_i² + ‖q_i‖² + ‖p_u‖²);

where: r_ui - the actual rating of the user u for the item i;

r̂_ui = μ + b_u + b_i + q_i^T p_u - the predicted rating of the user u for the item i;

b_u - the user bias reflecting the general tendencies of how the user u rates items;

b_i - the item bias reflecting the general tendencies of how the item i is rated by users;

q_i - the unknown factor vector for each item i that is being predicted;

p_u - the unknown factor vector for each user u that is being predicted;

‖·‖ - denotes the norm;

K - represents the set of actual ratings in the training dataset;

λ - the regularisation term (Koren et al., 2009).

Furthermore, the unique contribution of Simon Funk's SVD (FunkSVD) algorithm was to combine the regularisation term with the Stochastic Gradient Descent (SGD) learning algorithm. SGD became widely known for being easy to implement on large-scale datasets, especially those heavily afflicted by the data sparsity problem, and for the fairly short running time of its core computations and subsequent evaluation, since it keeps in memory only the most recently computed rating prediction errors (Lin, 2007). Without going into too much application detail, the SGD optimisation algorithm cycles through all of the available data points and predicts user ratings, each time readjusting the user and item factor vectors according to the prediction error, until it eventually arrives at the optimal pairs of user and item biases as well as of user and item factor vectors, as specified in the following mathematical fashion:

e_ui = r_ui − r̂_ui;

b_u ← b_u + γ·(e_ui − λ·b_u);

b_i ← b_i + γ·(e_ui − λ·b_i);

q_i ← q_i + γ·(e_ui·p_u − λ·q_i);

p_u ← p_u + γ·(e_ui·q_i − λ·p_u);

where: r_ui - the actual rating of the user u for the item i;

r̂_ui - the predicted rating of the user u for the item i;

q_i - the unknown factor vector for each item i that is being predicted;

p_u - the unknown factor vector for each user u that is being predicted;

e_ui - the rating prediction error of the user u for the item i;

λ - the regularisation term;

γ - the learning rate (Koren et al., 2009).
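The update cycle above can be sketched in Python as follows; the hyperparameter values, the toy ratings and the function names are illustrative assumptions rather than the exact FunkSVD implementation:

```python
import numpy as np

def sgd_train(ratings, n_users, n_items, n_factors=2,
              gamma=0.02, lam=0.02, n_epochs=200, seed=0):
    """SGD training of the biased model r_hat = mu + b_u + b_i + q_i^T p_u.
    Hyperparameter values here are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    mu = np.mean([r for _, _, r in ratings])            # global average
    b_u = np.zeros(n_users)                             # user biases
    b_i = np.zeros(n_items)                             # item biases
    P = rng.normal(0.0, 0.1, (n_users, n_factors))      # user factors p_u
    Q = rng.normal(0.0, 0.1, (n_items, n_factors))      # item factors q_i
    for _ in range(n_epochs):
        for u, i, r in ratings:
            e = r - (mu + b_u[u] + b_i[i] + Q[i] @ P[u])   # error e_ui
            b_u[u] += gamma * (e - lam * b_u[u])
            b_i[i] += gamma * (e - lam * b_i[i])
            q_old = Q[i].copy()            # use pre-update q_i for p_u's step
            Q[i] += gamma * (e * P[u] - lam * Q[i])
            P[u] += gamma * (e * q_old - lam * P[u])
    return mu, b_u, b_i, P, Q

def predict(model, u, i):
    mu, b_u, b_i, P, Q = model
    return mu + b_u[u] + b_i[i] + Q[i] @ P[u]

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]  # toy data
model = sgd_train(ratings, n_users=2, n_items=2)
print(round(predict(model, 0, 0), 2))   # prediction for a training pair
```

After training, the squared error over the known ratings falls well below the error of predicting the global average alone, which is the improvement the factor and bias terms are meant to deliver.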

Alternatively, soon after the Netflix Prize open competition, where FunkSVD attracted much attention to the SGD learning algorithm, another optimisation algorithm was popularised under the name of Alternating Least Squares (ALS). In essence, ALS fixes all the user factor vectors in order to compute the item ones, and then switches around to recompute the user vectors, each time minimising the sum of the squared residuals (Rendle, 2012). Two common use cases have been observed for this approach: first, it is very efficient when the recommender system is computationally capable of parallelising the execution of its predictive algorithm; and second, it suits recommender algorithms trained on the more abundant, implicitly collected user data, as it would be impractical to apply ALS to overly sparse explicit data (Hu et al., 2008).
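A single ALS iteration can be illustrated with the simplified sketch below, which assumes a fully observed rating matrix for brevity (real ALS implementations operate only on the known entries, and typically in parallel):

```python
import numpy as np

def als_sweep(R, P, Q, lam):
    """One ALS sweep on a fully observed ratings matrix R (users x items):
    fix Q to solve a ridge-regularised least-squares problem for every p_u,
    then fix the updated P to solve for every q_i."""
    k = P.shape[1]
    for u in range(R.shape[0]):
        P[u] = np.linalg.solve(Q.T @ Q + lam * np.eye(k), Q.T @ R[u])
    for i in range(R.shape[1]):
        Q[i] = np.linalg.solve(P.T @ P + lam * np.eye(k), P.T @ R[:, i])
    return P, Q

rng = np.random.default_rng(1)
R = np.array([[5.0, 1.0], [4.0, 2.0], [1.0, 5.0]])   # toy dense ratings
P = rng.normal(size=(3, 2))                           # user factors
Q = rng.normal(size=(2, 2))                           # item factors
for _ in range(20):
    P, Q = als_sweep(R, P, Q, lam=0.01)
print(np.round(P @ Q.T, 1))   # reconstruction approximates R
```

Each half-step is a closed-form least-squares solve, which is what makes the user and item updates embarrassingly parallel across rows.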

Second, an enhanced version of the FunkSVD algorithm called SVD++ was proposed in order to additionally take into account the implicit ratings of users, here defined as the fact that a given user has rated a certain item, regardless of the actual rating value. Owing to the incorporation of implicit ratings, the SVD++ method, when applied to the regularised squared error with SGD optimisation, often shows an improvement, albeit a small one, in rating prediction accuracy over the FunkSVD algorithm (Cacheda et al., 2011). For the calculation of the predicted rating, the new set of factors reflecting the implicit ratings is incorporated as follows:

r̂_ui = μ + b_u + b_i + q_i^T·(p_u + |N(u)|^(−1/2)·Σ_{j∈N(u)} y_j);

where: μ - the global average rating across all known data points;

b_u - the user bias reflecting the general tendencies of how the user u rates items;

b_i - the item bias reflecting the general tendencies of how the item i is rated by users;

q_i - the unknown factor vector for each item i that is being predicted;

p_u - the unknown factor vector for each user u that is being predicted;

|N(u)| - the total number of implicit ratings of user u, with N(u) denoting the set of items for which the user u provided implicit feedback;

y_j - the new set of factor terms that captures implicit ratings (Koren, 2008).
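The SVD++ prediction rule can be made concrete with the following sketch; all numeric values are hypothetical toy inputs:

```python
import numpy as np

def svdpp_predict(mu, b_u, b_i, q_i, p_u, Y, N_u):
    """SVD++ prediction: the user factor p_u is augmented with the
    normalised sum of implicit-feedback factors y_j over the set N(u)
    of items the user has interacted with (Koren, 2008)."""
    implicit = sum(Y[j] for j in N_u) / np.sqrt(len(N_u))
    return mu + b_u + b_i + q_i @ (p_u + implicit)

# toy values, all illustrative
mu, b_u, b_i = 3.5, 0.2, -0.1
q_i = np.array([1.0, 0.0])
p_u = np.array([0.5, 0.5])
Y = {0: np.array([0.2, 0.0]), 1: np.array([0.2, 0.0])}  # implicit factors y_j
print(round(svdpp_predict(mu, b_u, b_i, q_i, p_u, Y, N_u=[0, 1]), 3))
```

Note that the implicit term contributes to the prediction even when the explicit rating values themselves are unknown, which is exactly the extra signal SVD++ exploits.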

Last, but not least of the three selected matrix factorisation algorithms, the Non-Negative Matrix Factorisation (NMF) algorithm introduces a non-negativity constraint on both the user and item parameters of the basic SVD algorithm, which are calibrated as the model continues to learn, chiefly to ensure that the predicted feature values represent a user's interests more appropriately. To enforce this constraint, the algorithm rescales the learning rate so as to exclude all negative components from the predicted factor vectors of users and items, leaving only the non-negative ones to compute the next iteration of the predicted rating and of its respective error (Luo et al., 2014). Thus, the stages of the SGD learning algorithm are slightly modified in the following way to calculate only the non-negative user and item factors p_uf and q_if respectively, with the starting factor values also required to be positive:

p_uf ← p_uf · (Σ_{i∈I_u} q_if·r_ui) / (Σ_{i∈I_u} q_if·r̂_ui + λ_u·|I_u|·p_uf);

q_if ← q_if · (Σ_{u∈U_i} p_uf·r_ui) / (Σ_{u∈U_i} p_uf·r̂_ui + λ_i·|U_i|·q_if);

where: p_uf - the factor f of the vector p_u which is predicted for each user u;

q_if - the factor f of the vector q_i which is predicted for each item i;

I_u - the set of items rated by the user u, and U_i - the set of users who rated the item i;

λ_u - the new regularisation parameter for user u;

λ_i - the new regularisation parameter for item i (Luo et al., 2014).
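As a rough illustration of the non-negativity idea, the sketch below uses the classic dense, unregularised multiplicative-update variant of NMF rather than the regularised per-element scheme of Luo et al. (2014): because the factors start positive and are only ever rescaled by non-negative ratios, no additive step can ever push them below zero.

```python
import numpy as np

def nmf(R, k, n_iter=500, eps=1e-9, seed=0):
    """Dense multiplicative-update NMF, shown here only to illustrate the
    non-negativity constraint: factors are initialised positive and then
    multiplicatively rescaled, so they can never become negative."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.1, 1.0, (R.shape[0], k))   # user factors, positive
    Q = rng.uniform(0.1, 1.0, (R.shape[1], k))   # item factors, positive
    for _ in range(n_iter):
        P *= (R @ Q) / (P @ (Q.T @ Q) + eps)     # rescale user factors
        Q *= (R.T @ P) / (Q @ (P.T @ P) + eps)   # rescale item factors
    return P, Q

R = np.array([[5.0, 1.0], [4.0, 2.0], [1.0, 5.0]])  # non-negative ratings
P, Q = nmf(R, k=2)
print(np.round(P @ Q.T, 1))   # non-negative reconstruction of R
```

The multiplicative rescaling plays the same role as the rescaled learning rate described above: it is the mechanism that keeps every factor in the non-negative orthant throughout training.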

Hybrid recommendation systems, as the name suggests, combine the two approaches described above so as to minimise the drawbacks of each taken individually. Adomavicius and Tuzhilin (2005) identify four types of hybrid systems. The first builds one content-based and one collaborative filtering recommender and combines their results either via voting or via a linear combination of their outputs. The second incorporates some content-based features into collaborative filtering; one example is assigning a content-based profile to each user and then searching for similar user profiles instead of similar sets of rated items. The third incorporates some collaborative characteristics into a content-based model; one described approach is similar to matrix factorisation, but instead of reducing the dimensionality of a plain user-item rating matrix it operates on the user content profiles described in the second approach. Finally, there are systems built as a single unifying model that incorporates characteristics of both the content-based and the collaborative filtering approach (Adomavicius & Tuzhilin, 2005).

As Portugal et al. (2018) show in their systematic review of recommender system methodologies, over the past ten years the collaborative filtering algorithms (66 papers) have received more academic attention than the content-based (45 papers) and hybrid (18 papers) ones. Moreover, neighbourhood- and model-based collaborative filtering approaches have each attracted vastly more attention than in the years before 2012, while the hybrid methodology has remained the least researched of the three classical approaches. This is largely because hybrid recommendation has no single filtering method directly associated with it, but instead requires the more demanding effort of assembling a whole recommender system out of some combination of the different types of filtering methods.

Burke (2002) was the first to propose a solid classification of the ways of combining the contributions of the two recommender approaches, otherwise known as hybridisation methods. The first and most popular is the weighted method, which weighs the employed recommender algorithms according to their predictive accuracy by assigning each a number of votes denoting the extent of its contribution to the final user recommendation. Two methods usually act as alternatives to the weighted one and, unlike it, can appropriately be employed even when the combined techniques are not of the same relative value: the switching method, which switches between recommender techniques depending on the present-moment system task, and the mixed method, applied whenever a side-by-side recommendation output from both techniques is more favourable. Furthermore, the feature combination hybrid implements both collaborative and content-based features to produce a single recommendation. Quite unlike feature combination, feature augmentation does not merge the different techniques but sequences them, so that the recommendation output of one technique acts as feature input to the next. Similarly, the cascade method unfolds in stages, but here the recommendation output of the first technique is only further refined by the next one.
Lastly, a more distinctive way of creating a confluence of the two filtering approaches has been proposed as the meta-level method. It is similar to feature augmentation, except that the next technique in the sequence takes as its input not the previous one's features but the previous recommender model as a whole. For instance, several content-based models are employed to predict restaurant preferences for every separate user, and then these basic models, which are essentially vectors of estimates, are fed into a collaborative filtering model that makes predictions by comparing across users (Burke, 2002).
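The weighted hybridisation method can be sketched in a few lines; the item names, scores and weights below are hypothetical placeholders, and in practice the weights would be set from each model's validation accuracy:

```python
def weighted_hybrid(scores_cb, scores_cf, w_cb=0.4, w_cf=0.6):
    """Weighted hybridisation: a linear combination of the scores produced
    by a content-based and a collaborative filtering recommender, ranked
    from the highest combined score downwards."""
    items = set(scores_cb) | set(scores_cf)
    combined = {i: w_cb * scores_cb.get(i, 0.0) + w_cf * scores_cf.get(i, 0.0)
                for i in items}
    return sorted(combined, key=combined.get, reverse=True)

# hypothetical per-item scores produced by the two recommenders
content_based = {"louvre": 0.9, "eiffel": 0.4, "orsay": 0.7}
collaborative = {"louvre": 0.3, "eiffel": 0.8, "orsay": 0.5}
print(weighted_hybrid(content_based, collaborative))
# -> ['eiffel', 'orsay', 'louvre']
```

Items missing from one recommender's output simply contribute a zero score from that side, which is one simple way of handling the partial coverage of each technique.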

Because the TripAdvisor platform incorporates social network features to enhance the quality of its travel recommendations and to promote independent communities of active users, it is necessary to point out the specifics of recommender systems that operate on the basis of social trust networks. First of all, the term social trust can be defined as a connection between a pair of users based on either explicit feedback (subscriptions, voting, commenting) or implicit feedback (frequency of interactions: page visits, message exchanges) (Yang et al., 2014). In parallel to the two types of feedback, a corresponding distinction is drawn between two types of alternative data sources that are assumed to enhance recommendation quality: rich side information on items and users, and interaction-associated information. The former is mainly concerned with recording social trust relationships between users by exploring a subscription-based social network or by analysing various user-contributed data, such as numerical ratings and textual reviews, photo and video material, and social tags and geotags; the latter most commonly covers information on the time, location and user mood status recorded at particular instances of user-item interactions (Shi et al., 2014).

Moreover, there are again two general classes of social recommendation approaches: a matrix factorisation-based one, which combines user-user social trust data with user-item feedback history, and a nearest neighbour-based one, which first traverses the network of users' direct and indirect friends to gain the additional advantage of a social neighbourhood. As for specific models commonly trained on such diverse and abundant data, previous research has conveniently identified the state-of-the-art algorithms used to implement side information in memory-based (cosine vector similarity, k-nearest neighbours), model-based (Bayesian network model, matrix factorisation model) and graph-based (random walk) collaborative filtering approaches, as well as to incorporate interaction-based information by means of tensor factorisation, factorisation machines and graph-based approaches (Shi et al., 2014). It is not surprising that, when it comes to efficiently storing and manipulating huge arrays of data, the matrix factorisation algorithms prove superior and, in particular, manage to excel at both item rating prediction and item list recommendation tasks (Yang et al., 2014).

Not least important is to point out the primary challenges associated with current social recommender system tasks: first, trust- and distrust-based social recommendation of potential friends, products and other content; second, group recommendation for multiple people looking to choose a single activity, destination, etc.; and third, long-tail recommendation, i.e. recommending items with low popularity, which is crucial for an effective recommender system (Shi et al., 2014).

Finally, Pantano, Priporas and Stylos (2016) emphasise that, when choosing a specific recommendation system algorithm for computing item rating predictions, researchers should take into account the specifics of the domain's informational context (what data sources are available and how information on users and items is organised there), the type of data available in the particular domain (numerical, string, mixed, etc.), and the maximum acceptable level of computational cost (including the speed of programme execution).

1.3 Overview of popular travel recommender system approaches

The central aim of the majority of tourism-related recommendation systems is to provide users with suggestions of relevant travel destinations and tourist attractions, commonly known as Points of Interest (PoIs). Although more sophisticated systems exist that offer users travel route recommendations and even personalised trip planning services, in line with the objectives of the present paper only the algorithms that specialise in recommending PoIs will be considered (Gavalas et al., 2014).

The broad variety of popular recommender system frameworks employed in the travel domain can be most conveniently represented as the following categories, according to both the type of approach they employ and the type of data they are based on (Roopesh & Tulasi, 2018):

a) Context-aware systems rely on constantly gathering contextual information from a person's device, web browser or social media, thus enabling a live update of the user's data on current location, time and day of the week, current season, weather conditions, etc. The combination of such diverse contextual data provides the basis for presenting the user with, for instance, recommendations for a number of tourist attractions based on their working hours, shortest user travel paths and sentiment scores from social media (Meehan et al., 2013); or, based on the current weather data and the person's travel history in the form of geographically tagged photos, recommendations of similar-looking attractions in a different city that were shared by other users on photo-sharing web sites (Xu, 2014). The core difficulties with context-aware systems are the high intensity of ongoing per-user computations as well as the high complexity of the server-side software architecture required for the continuous, repeated parsing of massive volumes of online data;

