Collaborative filtering


Collaborative filtering (CF) is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc. Applications of collaborative filtering typically involve very large data sets. Collaborative filtering methods have been applied to many kinds of data, including sensing and monitoring data, as in mineral exploration or environmental sensing over large areas or with multiple sensors; financial data, as when financial service institutions integrate many financial sources; and electronic commerce and Web 2.0 applications, where the focus is on user data. The remainder of this discussion focuses on collaborative filtering for user data, although some of the methods and approaches may apply to the other major applications as well.

Collaborative filtering, in the narrower sense, is the method of making automatic predictions (filtering) about the interests of a user by collecting taste information from many users (collaborating). The underlying assumption of the CF approach is that those who agreed in the past tend to agree again in the future. For example, a collaborative filtering or recommendation system for television tastes could make predictions about which television show a user should like given a partial list of that user's tastes (likes or dislikes)[1]. Note that these predictions are specific to the user, but use information gleaned from many users. This differs from the simpler approach of giving an average (non-specific) score for each item of interest, for example based on its number of votes.

Methodology

Collaborative filtering systems have many forms, but many common systems can be reduced to two steps:

  1. Look for users who share the same rating patterns with the active user (the user whom the prediction is for).
  2. Use the ratings from those like-minded users found in step 1 to calculate a prediction for the active user.

This falls under the category of user-based collaborative filtering. A specific application of this is the user-based Nearest Neighbor algorithm.
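
The two steps above can be sketched in a few lines of Python. The ratings data, the choice of cosine similarity, and the function names are illustrative assumptions, not a definitive implementation:

```python
# Ratings as user -> {item: rating}; all data here is invented.

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = (sum(a[i] ** 2 for i in common) ** 0.5
           * sum(b[i] ** 2 for i in common) ** 0.5)
    return num / den if den else 0.0

def predict(ratings, active, item):
    """Step 1: score every other user by similarity to the active user.
    Step 2: similarity-weighted average of their ratings for the item."""
    num = den = 0.0
    for user, their in ratings.items():
        if user == active or item not in their:
            continue
        w = similarity(ratings[active], their)
        num += w * their[item]
        den += abs(w)
    return num / den if den else None

ratings = {
    "ann": {"a": 5, "b": 3, "c": 4},
    "bob": {"a": 5, "b": 3, "c": 4, "d": 2},
    "eve": {"a": 1, "b": 5, "d": 5},
}
print(predict(ratings, "ann", "d"))  # lands between bob's 2 and eve's 5, closer to bob
```

Real systems usually restrict step 2 to the k nearest neighbours and correct for each user's mean rating; both refinements are omitted here for brevity.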

Alternatively, item-based collaborative filtering, popularized by Amazon.com ("users who bought x also bought y") and first proposed in the context of rating-based collaborative filtering by Vucetic and Obradovic in 2000[2], proceeds in an item-centric manner:

  1. Build an item-item matrix determining the relationships between pairs of items.
  2. Using the matrix and the data on the current user, infer the user's tastes.

See, for example, the Slope One item-based collaborative filtering family.
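
A minimal sketch of the Slope One scheme named above (the ratings and names are invented for illustration; this is the weighted variant, which scales each item pair by its number of co-raters):

```python
from collections import defaultdict

# Ratings as user -> {item: rating}; all data here is invented.

def train(ratings):
    """For every ordered item pair (i, j), accumulate the rating
    differences r_i - r_j and the number of users who rated both."""
    diff_sum = defaultdict(float)
    count = defaultdict(int)
    for user_ratings in ratings.values():
        for i, ri in user_ratings.items():
            for j, rj in user_ratings.items():
                if i != j:
                    diff_sum[(i, j)] += ri - rj
                    count[(i, j)] += 1
    return diff_sum, count

def predict(ratings, diff_sum, count, user, item):
    """Weighted Slope One: shift each of the user's own ratings by the
    average difference to the target item, weighted by co-rater counts."""
    num = den = 0.0
    for j, rj in ratings[user].items():
        c = count.get((item, j), 0)
        if c:
            num += (diff_sum[(item, j)] / c + rj) * c
            den += c
    return num / den if den else None

ratings = {
    "ann": {"a": 5, "b": 3},
    "bob": {"a": 3, "b": 4, "c": 3},
    "eve": {"b": 2, "c": 5},
}
diff_sum, count = train(ratings)
print(predict(ratings, diff_sum, count, "ann", "c"))  # 13/3, about 4.33
```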

Another form of collaborative filtering is based on implicit observations of normal user behavior (as opposed to the artificial behavior imposed by a rating task). These systems observe what a user has done together with what all users have done (what music they have listened to, what items they have bought) and use that data to predict the user's future behavior, or to predict how a user might like to behave given the chance. The predictions then have to be filtered through business logic to determine how they should affect what a business system does. For instance, it is not useful to offer to sell somebody music they have already demonstrated they own, or to suggest more travel guides for Paris to someone who has already bought one for that city.
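
As an illustration of the implicit-feedback idea, the following sketch scores items by how often they co-occur with a user's purchases and then applies the business-logic filter described above, dropping items the user already owns. The purchase histories and the scoring rule are invented for the example:

```python
from collections import Counter

# Invented purchase histories: user -> set of items owned.
histories = {
    "u1": {"paris guide", "rome guide", "camera"},
    "u2": {"paris guide", "camera", "phrasebook"},
    "u3": {"rome guide", "camera"},
}

def recommend(histories, user, n=2):
    """Score items by co-occurrence with the user's own purchases,
    skipping anything the user already owns."""
    owned = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(owned & items)
        if overlap:
            for item in items - owned:   # the business-logic filter
                scores[item] += overlap
    return [item for item, _ in scores.most_common(n)]

print(recommend(histories, "u3"))  # ['paris guide', 'phrasebook']
```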

In the age of information explosion, such techniques can prove very useful because the number of items in even one category (such as music, movies, books, news, or web pages) has become so large that a single person cannot possibly view them all in order to select the relevant ones. Relying on a score or rating averaged across all users ignores the specific demands of an individual user, and performs particularly poorly on tasks where there is large variation in interest, for example the recommendation of music. There are, however, other methods to combat information explosion, such as web search and data clustering.


History

Collaborative filtering stems from the earlier system of information filtering, in which relevant information is brought to the attention of the user by observing patterns in previous behavior and building a user profile. This system was essentially unable to help with exploration of the web and suffered from the cold-start problem: new users had to build up a profile before the filtering became effective.

The first system to use collaborative filtering was the Information Tapestry project at Xerox PARC [3]. This system allowed users to find documents based on previous comments by other users. It had significant limitations: it worked only for small groups of people and had to be accessed through word-specific queries, which largely defeated the purpose of collaborative filtering.

The first system with proven results was the Bellcore Video Recommender [4].

USENET netnews systems furthered collaborative filtering by making it available to a mass scale of users while offering a simpler method for accessing articles. These systems allowed users to rate material based on popularity, which then allowed other users to search for articles based on those ratings.

One of the largest early collaborative filtering services for music recommendations widely available on the World Wide Web was Firefly, which evolved from early MIT Media Lab research projects.[5][6][7] Firefly was bought by Microsoft in 1998. The service itself was closed down in 1999, with much of its technology and staff helping to create Microsoft Passport.

Memory-based

Memory-based CF uses user rating data to compute the similarity between users or items, which is then used to make recommendations. It was the earliest mechanism and is used in many commercial systems, as it is easy to implement and effective. Typical examples of this mechanism are neighborhood-based CF and item-based/user-based top-N recommendations[8].

The neighborhood-based algorithm calculates the similarity between two users or items, then produces a prediction for the user by taking a weighted average of all the relevant ratings. Computing the similarity between items or users is an important part of this approach; common measures include the Pearson correlation and vector cosine similarity.
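
For example, the Pearson correlation mentioned above can be computed over two users' co-rated items as follows (a sketch with invented ratings):

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation restricted to the items both users rated."""
    common = sorted(set(a) & set(b))
    n = len(common)
    if n < 2:
        return 0.0
    xs = [a[i] for i in common]
    ys = [b[i] for i in common]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sqrt(sum((x - mx) ** 2 for x in xs))
           * sqrt(sum((y - my) ** 2 for y in ys)))
    return num / den if den else 0.0

# Two invented users whose tastes move together perfectly:
a = {"a": 5, "b": 3, "c": 1}
b = {"a": 4, "b": 3, "c": 2}
print(pearson(a, b))  # 1.0
```

Unlike raw cosine similarity, Pearson subtracts each user's mean first, so a harsh grader and a generous grader with the same relative preferences still correlate highly.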

The user-based top-N recommendation algorithm identifies the k most similar users to an active user using a similarity-based vector model. After the k most similar users are found, their corresponding user-item matrices are aggregated to identify the set of items to recommend. A popular method for finding similar users is locality-sensitive hashing, which implements the nearest-neighbour mechanism in linear time.
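
Locality-sensitive hashing can be sketched with random-hyperplane signatures, which approximate cosine similarity; the number of planes, the toy item vocabulary, and the user vectors below are arbitrary illustrative choices:

```python
import random
from collections import defaultdict

random.seed(0)
items = ["a", "b", "c", "d"]          # toy item vocabulary
planes = [[random.gauss(0, 1) for _ in items] for _ in range(8)]

def signature(vec):
    """One bit per random hyperplane: which side the user vector falls on.
    Similar vectors agree on most bits, so they tend to share a bucket."""
    return tuple(sum(p * v for p, v in zip(plane, vec)) >= 0
                 for plane in planes)

users = {
    "ann": [5, 3, 0, 1],
    "bob": [5, 3, 0, 1],   # identical tastes -> identical signature
    "eve": [0, 1, 5, 5],
}
buckets = defaultdict(list)
for name, vec in users.items():
    buckets[signature(vec)].append(name)

# Candidate neighbours of "ann" are whoever landed in her bucket.
print(buckets[signature(users["ann"])])
```

The hashing is probabilistic: dissimilar users can still collide, so real systems use several hash tables and re-rank the candidates with an exact similarity measure.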

The advantages of this approach are the explainability of the results (an important aspect of recommendation systems), ease of implementation and use, the ability to add new data easily and incrementally, and the fact that it need not consider the content of the items being recommended. The mechanism also scales well with co-rated items.

There are several disadvantages to this approach. First, it depends on human ratings. Second, its performance decreases when the data get sparse, which is frequent with web-related items; this limits the scalability of the approach and causes problems with large data sets. Third, it cannot handle new users or new items.

Model-based

Models are developed using data mining and machine learning algorithms to find patterns in training data; these models are then used to make predictions on real data. There are many model-based CF algorithms, including Bayesian networks, clustering models, latent semantic models such as singular value decomposition, probabilistic latent semantic analysis, multiple multiplicative factor models, latent Dirichlet allocation, and Markov decision process based models[8].

This approach has the more holistic goal of uncovering latent factors that explain the observed ratings[9]. Most of the models are based on classification or clustering techniques that characterize the user from the available data. The number of parameters can be reduced with dimensionality-reduction techniques such as principal component analysis.
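
A latent-factor model of this kind can be sketched with a truncated SVD of a small, invented rating matrix. Treating the zeros below as missing ratings is a simplification; real systems factorize only the observed entries:

```python
import numpy as np

# Invented user-item rating matrix; zeros stand in for missing ratings.
R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])

k = 2                                  # number of latent factors to keep
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat now scores every (user, item) cell, including the unrated ones;
# a recommender would surface each user's highest-scoring unrated items.
print(np.round(R_hat, 2))
```

Keeping only k factors is what compresses the data: each user and item is described by k latent coordinates instead of a full row or column of ratings.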

There are several advantages to this paradigm. It handles sparsity better than memory-based approaches, which helps it scale to large data sets. It improves prediction performance and gives an intuitive rationale for the recommendations.

The disadvantages of this approach lie in the expensive model building. There is a tradeoff between prediction performance and scalability, useful information can be lost through dimensionality-reduction models, and a number of models have difficulty explaining their predictions.

Hybrid

A number of applications combine the memory-based and model-based CF algorithms. These hybrids overcome the limitations of pure CF approaches and improve prediction performance; importantly, they overcome CF problems such as sparsity and loss of information. However, they have increased complexity and are expensive to implement[10].


In commercial systems

Commercial sites that implement collaborative filtering systems include:

In non-commercial systems

Non-commercial sites that implement collaborative filtering systems include:

Service               Type
AmphetaRate           RSS articles
Everyone's a Critic   movies
Gnomoradio            music (free)
MovieLens             movies
Filmaster             movies (free)
Rate Your Music       music

Software libraries

Below are links to software libraries that allow developers to add collaborative filtering to applications or web sites:

Innovations

  • New algorithms have been developed for CF as a result of the Netflix Prize.
  • Cross-system collaborative filtering, where user profiles across multiple recommender systems are combined in a privacy-preserving manner.
  • Robust collaborative filtering, where recommendations are stable against efforts of manipulation. This research area is still active and not completely solved.[11]

References

  1. An integrated approach to TV Recommendations by TV Genius.
  2. Vucetic, S.; Obradovic, Z. (2000). A Regression-Based Approach for Scaling-Up Personalized Recommender Systems in E-Commerce.
  3. Goldberg, David; Nichols, David; Oki, Brian M.; Terry, Douglas (1992). "Using collaborative filtering to weave an information tapestry". Communications of the ACM 35 (12): 61–70. doi:10.1145/138859.138867. ISSN 0001-0782.
  4. Hill, Will; et al. (1995). "Recommending and evaluating choices in a virtual community of use". CHI '95: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Denver, Colorado, United States. New York, NY, USA: ACM Press/Addison-Wesley. pp. 194–201. doi:10.1145/223904.223929. ISBN 0-201-84705-1.
  5. Lambert, Laura; Poole, Hilary W.; Woodford, Chris; Moschovitis, Christos J. P. (2005). The Internet: A Historical Encyclopedia. ABC-CLIO. pp. 162ff. ISBN 1851096590.
  6. Moya K. Mason, Short History of Collaborative Filtering.
  7. Jerry Michalski, Collaborative Filters. Esther Dyson's Monthly Report, 19 November 1996.
  8. Su, Xiaoyuan; Khoshgoftaar, Taghi M. (2009). "A survey of collaborative filtering techniques". Advances in Artificial Intelligence.
  9. Factor in the Neighbors: Scalable and Accurate Collaborative Filtering.
  10. Google News Personalization: Scalable Online Collaborative Filtering.

This article is based on material from Wikipedia.org, licensed under the GNU Free Documentation License.

