
Recommending from big data

As research on core recommender systems progresses and matures, it becomes clear that a fundamental issue for these algorithms is how to embed the core techniques in real operational systems and how to deal with massive and dynamic sets of data. Recommender system algorithms are very effective at identifying and predicting user preferences based on the explicit or implicit indications of preference that the user provided in the past, but dealing with issues such as scalability has not been a primary focus in mainstream recommender research. When dealing with scalability, we can differentiate between two interconnected issues.

  • Computational efficiency: The table below shows the computational complexity of the most popular recommender system algorithms. This complexity mainly depends on the number of items (n) and the number of users (m) in the system and, for matrix factorization techniques, on the number of features (k) used to represent the users/items. It is an important measure that can be used to estimate how much computational resource is needed to train the model and provide predictions. It also becomes a critical factor when the number of users or items in the system starts to skyrocket and the recommender service provider has to find enough resources to process the data; at that point it makes a huge difference whether the algorithm runs in linear or quadratic time.
Algorithm           Training        Prediction
User-based          -               O(mn)
Item-based          O(mn²)          O(n)
Similarity fusion   O(n²m + m²n)    O(mn)
Regression-based    O(mn²)          O(n)
Slope One           O(mn²)          O(n)
LSI/SVD             O((m+n)³)       O(1)
RSVD                O(mnk)          O(1)
SVD++               O(mn²k)         O(1)
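
To make the prediction cost in the table more concrete, here is a minimal illustrative sketch in Python (not production code, and not tied to any particular library) of user-based prediction: predicting a single rating scans all m users and computes an O(n) similarity against each of them, which is where the O(mn) cost comes from.

    import numpy as np

    def predict_user_based(R, user, item, eps=1e-9):
        """R is an m x n rating matrix with 0 marking missing ratings."""
        target = R[user]
        scores, weights = 0.0, 0.0
        for other in range(R.shape[0]):              # loop over all m users
            if other == user or R[other, item] == 0:
                continue
            # cosine similarity over all n items -> O(n) work per neighbour
            sim = R[other] @ target / (np.linalg.norm(R[other]) * np.linalg.norm(target) + eps)
            scores += sim * R[other, item]
            weights += abs(sim)
        return scores / weights if weights else target[target > 0].mean()

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 2, 5],
                  [1, 0, 4, 4]], dtype=float)
    print(predict_user_based(R, user=0, item=2))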

There are a number of things that this table does not show. As a new rating enters the system, depending on the algorithm, not all of the previously learnt model needs to be updated. For example, in the case of the item-based algorithm, it was observed that the similarity between items does not change considerably over time, so the model only needs to be updated infrequently. There are other approaches (e.g. online learning) that help save resources by ‘folding in’ the new data and partially updating the model; a minimal sketch of this folding-in idea follows below.
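
A minimal sketch of the ‘folding in’ idea, assuming a matrix-factorization model whose item factors have already been learnt: a new user's ratings are projected onto the fixed item factors by solving a small regularised least-squares problem, instead of retraining the whole model. The function and variable names below are illustrative only.

    import numpy as np

    def fold_in_user(Q, rated_items, ratings, reg=0.1):
        """Return a k-dim factor vector for a new user.

        Q           : (k, n) item-factor matrix learnt earlier (kept fixed)
        rated_items : indices of the items the new user has rated
        ratings     : the corresponding rating values
        """
        Q_sub = Q[:, rated_items]                       # (k, r) columns for the rated items
        A = Q_sub @ Q_sub.T + reg * np.eye(Q.shape[0])  # (k, k) regularised normal equations
        b = Q_sub @ np.asarray(ratings, dtype=float)    # (k,)
        return np.linalg.solve(A, b)                    # new user's factor vector

    # toy example: k=2 factors, n=4 items
    Q = np.array([[0.9, 0.1, 0.8, 0.2],
                  [0.1, 0.9, 0.2, 0.8]])
    p_new = fold_in_user(Q, rated_items=[0, 2], ratings=[5, 4])
    print(Q.T @ p_new)   # predicted ratings for all items, no retraining of Q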

  • Distributability: This is essentially what makes recommender systems truly scalable. Without being able to break the algorithm down into small, repeatable parts, it is not possible to implement it in a distributed environment such as Hadoop. With some algorithms, such as the user-based and item-based approaches, it is easy to see how they can be implemented in a distributed fashion, as each part of the computation is independent of the others (see the sketch below). In other cases, such as matrix factorization, where the model is learnt by propagating information, this is more problematic: the designer has to make compromises and fix part of the learning in order to be able to distribute it.
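
As a toy illustration of this independence (a sketch only, not how Mahout or Hadoop actually organise the job), the item-item similarity computation below is split into per-item tasks that share nothing but the rating matrix; here they run on a local process pool, but on a cluster each task could be a map task.

    import numpy as np
    from multiprocessing import Pool

    # toy rating matrix: m=100 users, n=20 items (seeded so every worker sees the same data)
    R = np.random.default_rng(0).integers(0, 6, size=(100, 20)).astype(float)
    norms = np.linalg.norm(R, axis=0) + 1e-9

    def item_similarities(j):
        """Cosine similarity of item j against every other item -- independent work per item."""
        return j, (R.T @ R[:, j]) / (norms * norms[j])

    if __name__ == "__main__":
        with Pool(4) as pool:
            rows = dict(pool.map(item_similarities, range(R.shape[1])))
        S = np.vstack([rows[j] for j in range(R.shape[1])])   # full n x n similarity matrix
        print(S.shape)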

I see more and more research in this direction, and many classical algorithms are being rethought and reimplemented for distributed systems.

Posted on February 7, 2012 in Blog, Science


Responses (2)

  1. Mahesh
    February 13, 2012 at 1:10 pm ·

    Hi Tamas, I saw that the latest version of Mahout, 0.6, was released recently this month (6th Feb 2012) and found there are many changes regarding clustering and other things too, like SSVD enhancements. Hope there will be some drastic improvements in the recommender algorithms as well, as you are referring to.
    What do you mean by “online learning”, and can you give me more insight on this?

  2. Tamas
    February 14, 2012 at 10:38 am ·

    I meant online learning approaches, where the model learns as the data flows in. Their great advantage is that they only re-train part of the model, so the new data is incorporated straight away.
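
For readers wondering what such an online update could look like, here is a small illustrative sketch (not Mahout's implementation) for a matrix-factorization model: a single SGD step on the incoming rating touches only the affected user and item factor rows.

    import numpy as np

    def online_update(P, Q, u, i, rating, lr=0.01, reg=0.05):
        """One SGD step for a new (user u, item i, rating) triple.

        P: (m, k) user factors, Q: (n, k) item factors, updated in place;
        only row u of P and row i of Q are touched.
        """
        pu = P[u].copy()
        err = rating - pu @ Q[i]
        P[u] += lr * (err * Q[i] - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])
        return err

    rng = np.random.default_rng(0)
    P, Q = rng.normal(size=(50, 8)), rng.normal(size=(30, 8))
    for _ in range(20):                     # the same rating streaming in repeatedly
        err = online_update(P, Q, u=3, i=7, rating=4.0)
    print(round(float(err), 3))             # the error shrinks as the rating is absorbed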

