Web Mining and Recommendation
CENG 514 November 24, 2018
Web Mining
Web mining is the use of data mining techniques to automatically discover and extract information from web documents and services.
Examples of Discovered Patterns
- Association rules: 75% of Facebook users also have FourSquare accounts
- Classification: people with age less than 40 and salary > 40K trade online
- Clustering: users A and B access similar URLs
- Outlier detection: user A spends more than twice the average amount of time surfing the Web
Why is Web Mining Different?
The Web is a huge collection of documents, plus:
- hyper-link information
- access and usage information
The Web is very dynamic: new pages are constantly being generated.
Challenge: develop new Web mining algorithms and adapt traditional data mining algorithms to
- exploit hyper-links and access patterns
- be incremental
Web Mining Applications
E-commerce:
- Generate user profiles
- Targeted advertising
- Fraud detection
- ...
Information retrieval (search) on the Web:
- Automated generation of topic hierarchies
- Web knowledge bases
- Extraction of schema for XML documents
- ...
Network management:
- Performance management
- Fault management
User Profiling
Important for improving customization: provide users with pages and advertisements of interest.
- Generate user profiles based on their access patterns
- Cluster users based on frequently accessed URLs
- Use a classifier to generate a profile for each cluster
- Example: Engage Technologies tracks web traffic to create anonymous user profiles of Web surfers
Internet Advertising
Ads are a major source of revenue for Web portals and e-commerce sites. Many startups do internet advertising: DoubleClick, AdForce, AdKnowledge.
Internet Advertising
Scheme 1:
- Manually associate a set of ads with each user profile
- For each user, display an ad from the set based on the user's profile
Scheme 2:
- Automate the association between ads and users
- Use ad-click information to cluster users (each user is associated with the set of ads that he/she clicked on)
- For each cluster, find the ads that occur most frequently in the cluster; these become the ads for the users in that cluster
Internet Advertising
Use collaborative filtering (e.g., LikeMinds, Firefly):
- Each user Ui has a rating for a subset of ads (based on click information, time spent, items bought, etc.)
- Rij = rating of user Ui for ad Aj
- Problem: predict user Ui's rating for an unrated ad Aj
Internet Advertising
Key idea: user Ui's rating for ad Aj is set to Rkj, where Uk is the user whose ad ratings are most similar to Ui's.
Ui's rating for an ad Aj that has not previously been displayed to Ui is computed as follows:
- Consider each user Uk who has rated ad Aj
- Compute Dik, the distance between Ui's and Uk's ratings on common ads
- Ui's rating for Aj = Rkj, where Uk is the user with the smallest Dik
- Display to Ui the ad Aj with the highest computed rating
Fraud
With the growing popularity of e-commerce, systems that detect and prevent fraud on the Web become important.
- Maintain a signature for each user based on buying patterns on the Web (e.g., amount spent, categories of items bought)
- If the buying pattern changes significantly, signal possible fraud
- E.g., use of domain knowledge and neural networks for credit card fraud detection
Network Management
Performance management:
- Annual bandwidth demand is increasing ten-fold on average, while annual bandwidth supply is rising only by a factor of three; the result is frequent congestion
- During a major event (e.g., the World Cup), an overwhelming number of user requests can result in millions of redundant copies of data flowing back and forth across the world
Fault management:
- Analyze alarm and traffic data to carry out root-cause analysis of faults
Web Mining
- Web content mining: web page content mining, search result mining
- Web structure mining: mining the hyperlink structure, e.g., for search
- Web usage mining: access patterns, customized usage patterns
Web Content Mining: Crawlers
A crawler is a program that traverses the hypertext structure of the Web.
- Seed URL: the page (or set of pages) that the crawler starts with
- Links found on visited pages are saved in a queue
- Visited pages are used to build an index
Focused crawlers restrict the crawl to pages relevant to a given topic.
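The queue-based traversal described above can be sketched as a breadth-first crawl. This is a minimal illustration, not a production crawler: `get_links` is a hypothetical stand-in for the real fetch-and-parse step, so it can be any function, e.g., a lookup into a test link graph.

```python
from collections import deque

def crawl(seed, get_links, limit=100):
    """Breadth-first crawl starting from a seed URL.
    get_links(url) returns the out-links of a page (here a stand-in
    for fetching and parsing).  Returns the list of visited URLs,
    which an indexer would then process."""
    queue = deque([seed])        # links from visited pages are saved in a queue
    seen = {seed}
    visited = []
    while queue and len(visited) < limit:
        url = queue.popleft()
        visited.append(url)      # in a real crawler: fetch the page, update the index
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```

Using a FIFO queue gives breadth-first order; swapping in a priority queue keyed by topic relevance would turn this into a focused crawler.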
Basic Measures for Text Retrieval
(Venn diagram: within the set of all documents, the retrieved set and the relevant set overlap in the "relevant & retrieved" region.)
Precision: the percentage of retrieved documents that are in fact relevant to the query (i.e., "correct" responses).
Recall: the percentage of documents relevant to the query that were in fact retrieved.
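The two measures follow directly from the overlap in the Venn diagram; a small sketch (function name is ours, not from the slides):

```python
def precision_recall(retrieved, relevant):
    """Precision = |relevant ∩ retrieved| / |retrieved|
       Recall    = |relevant ∩ retrieved| / |relevant|"""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)          # "relevant & retrieved" region
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```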
Information Retrieval Techniques
Basic concepts:
- A document can be described by a set of representative keywords called index terms.
- Different index terms have varying relevance when used to describe document contents. This effect is captured by assigning a numerical weight to each index term of a document (e.g., frequency, TF-IDF).
DBMS analogy: index terms correspond to attributes, and weights correspond to attribute values.
Indexing: Inverted Index
A data structure for supporting text queries, similar to the index of a book.
- document_table: a set of document records <doc_id, postings_list>
- term_table: a set of term records <term, postings_list>
- Answering a query: find all docs associated with one term or a set of terms
- Pros: easy to implement
- Cons: does not handle synonymy and polysemy well, and postings lists can be very long (storage can be very large)
Inverted Index
(Figure: an inverted index over documents stored on disk. Each term in the lexicon, e.g., "aalborg", "arm", "armada", "armadillo", "armani", ..., "zz", maps to a sorted postings list of document ids, e.g., arm -> 19, 29, 98, 143, ...)
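The term_table-with-postings-lists structure above can be sketched in a few lines; the tokenization (lowercase, whitespace split) is a simplifying assumption, and the query shown is conjunctive (all terms must match).

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict doc_id -> text.  Returns term -> sorted postings list."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():     # naive tokenizer
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def query(index, terms):
    """Find all docs associated with a set of terms (intersect postings)."""
    postings = [set(index.get(t, ())) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []
```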
Vector Space Model
Documents and user queries are represented as m-dimensional vectors, where m is the total number of index terms in the document collection. The degree of similarity of a document d with regard to a query q is calculated as the correlation between the vectors that represent them, using measures such as the Euclidean distance or the cosine of the angle between the two vectors.
Vector Space Model
- Represent a document by a term vector
- Term: a basic concept, e.g., a word or phrase
- Each term defines one dimension; N terms define an N-dimensional space
- Each vector element is a term weight, e.g., d = (x1, ..., xN), where xi is the "importance" of term i
VS Model: Illustration
(Figure: documents plotted in a three-dimensional term space with axes "Java", "Microsoft", and "Starbucks". Documents cluster into categories C1, C2, C3; a new document is assigned to the most likely category based on vector similarity.)
Issues to Be Handled
How to select terms that capture "basic concepts":
- Stop-word removal, e.g., "a", "the", "always", "along"
- Word stemming, e.g., "computer", "computing", "computerize" => "compute"
- Latent semantic indexing
How to assign weights:
- Not all words are equally important; some are more indicative than others, e.g., "algebra" vs. "science"
How to measure similarity
Latent Semantic Indexing
Basic idea:
- Similar documents have similar word frequencies
- Difficulty: the term-frequency matrix is very large
- Use singular value decomposition (SVD) to reduce its size, retaining the K most significant dimensions
Method:
1. Create a term x document weighted frequency matrix A
2. Compute the SVD: A = U * S * V'
3. Choose K and obtain the truncated Uk, Sk, and Vk
4. Create a query vector q'
5. Project q' into the term-document space: Dq = q' * Uk * Sk^-1
6. Calculate similarities: cos α = (Dq · D) / (||Dq|| * ||D||)
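The six steps above map almost directly onto NumPy. This is a sketch under the assumption that A is small enough to decompose in memory and that all K retained singular values are nonzero (so Sk is invertible); function names are ours.

```python
import numpy as np

def lsi(A, k):
    """Steps 1-3: SVD of the term x document matrix A, truncated to the
    k largest singular values.  Returns (Uk, Sk, Vk); rows of Vk are
    document coordinates in the k-dimensional latent space."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], np.diag(s[:k]), Vt[:k, :].T

def project_query(q, Uk, Sk):
    """Step 5: Dq = q' * Uk * Sk^-1."""
    return q @ Uk @ np.linalg.inv(Sk)

def cosine(a, b):
    """Step 6: cos α = (a · b) / (||a|| * ||b||)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A useful sanity check: projecting one of A's own columns should land on that document's row of Vk.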
How to Assign Weights
Two-fold heuristics based on frequency:
- TF (term frequency): the more frequent a term is within a document, the more relevant it is to the document's semantics
- IDF (inverse document frequency): the less frequent a term is among documents, the more discriminative it is
TF Weighting
Weighting idea: more frequent => more relevant to the topic.
- Raw TF = f(t, d): how many times term t appears in document d
- Normalization: document length varies, so relative frequency is preferred, e.g., maximum-frequency normalization, which divides f(t, d) by the frequency of the most frequent term in d
IDF Weighting
Idea: the less frequent a term is among documents, the more discriminative it is.
Formula: IDF(t) = 1 + log(n/k), where
- n: total number of documents
- k: number of documents in which term t appears
TF-IDF Weighting
TF-IDF weighting: weight(t, d) = TF(t, d) * IDF(t)
- Frequent within a doc => high TF => high weight
- Selective among docs => high IDF => high weight
Recall the VS model:
- Each selected term represents one dimension
- Each doc is represented by a feature vector whose coordinate for term t is the TF-IDF weight of t in d
Many more complex and more effective weighting variants exist in practice.
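The weighting scheme above, raw TF times IDF(t) = 1 + log(n/k) as defined on the preceding slides, can be sketched as follows (function name and input format are ours):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """docs: list of token lists.  Returns one sparse vector (dict) per doc
    with weight(t, d) = TF(t, d) * IDF(t), using raw TF and
    IDF(t) = 1 + log(n / k), where k = number of docs containing t."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency k
    idf = {t: 1 + math.log(n / k) for t, k in df.items()}
    return [{t: f * idf[t] for t, f in Counter(doc).items()} for doc in docs]
```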
How to Measure Similarity?
Given two documents represented as term-weight vectors d1 and d2, similarity can be defined as:
- the dot product: sim(d1, d2) = d1 · d2
- the normalized dot product (cosine): sim(d1, d2) = (d1 · d2) / (||d1|| * ||d2||)
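Both definitions are a few lines each over sparse vectors (dicts mapping terms to weights); the representation choice is ours.

```python
def dot(d1, d2):
    """Dot product of two sparse term-weight vectors."""
    return sum(w * d2[t] for t, w in d1.items() if t in d2)

def cosine_sim(d1, d2):
    """Normalized dot product: (d1 · d2) / (||d1|| * ||d2||)."""
    norm1 = sum(w * w for w in d1.values()) ** 0.5
    norm2 = sum(w * w for w in d2.values()) ** 0.5
    return dot(d1, d2) / (norm1 * norm2) if norm1 and norm2 else 0.0
```

The cosine ignores vector length, so a long document and a short one about the same topic still score as similar.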
Illustrative Example
IDF values: text 2.4, mining 4.5, travel 2.8, map 3.3, search 2.1, engine 5.4, govern 2.2, president 3.2, congress 4.3.
Document vectors, written as freq(TF-IDF weight):
- doc1: text 2(4.8), mining 1(4.5), search 1(2.1), engine 1(5.4)
- doc2: text 1(2.4), travel 2(5.6), map 1(3.3)
- doc3: govern 1(2.2), president 1(3.2), congress 1(4.3)
- newdoc: text 1(2.4), mining 1(4.5)
Dot-product similarities:
- Sim(newdoc, doc1) = 2.4*4.8 + 4.5*4.5
- Sim(newdoc, doc2) = 2.4*2.4
- Sim(newdoc, doc3) = 0
Web Structure Mining
- PageRank (Google, '00)
- Clever (IBM, '99)
Search Engine – Two Rank Functions
(Figure: search engine architecture. A web page parser feeds an indexer, an anchor-text generator, and a web graph constructor, producing meta data, a forward index, an inverted index, backward links (anchor text), a URL dictionary, a term dictionary (lexicon), and a web topology graph. Two rank functions result: a relevance ranking based on content/text similarity, and an importance ranking based on link-structure analysis.)
The PageRank Algorithm
Intuition: PageRank can be seen as the probability that a "random surfer" visits a page.
Brin, S. and Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. In Proc. WWW Conference, pages 107-117.
Basic idea: the significance of a page is determined by the significance of the pages linking to it.
- A link i → j means i considers j important; the more important i is, the more important j becomes
- If i has many out-links, each individual link carries less importance
- Initially, all importances pi = 1; the pi are then refined iteratively
PR(A) = p + (1 - p) * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
where T1, ..., Tn are the pages linking to A and C(Ti) is the number of out-links of page Ti.
- p is the probability that the surfer gets bored and starts on a new random page
- (1 - p) is the probability that the random surfer follows a link on the current page
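The iterative refinement of the formula above can be sketched as a fixed number of update sweeps over the link graph. This is a toy version (dense dicts, a fixed iteration count, no dangling-node handling), implementing exactly PR(A) = p + (1 - p) * Σ PR(Ti)/C(Ti):

```python
def pagerank(links, p=0.15, iters=50):
    """links: dict page -> list of pages it links to.
    Returns a dict of PageRank scores."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    pr = {page: 1.0 for page in pages}            # initially all importances = 1
    out_count = {page: len(links.get(page, [])) for page in pages}
    for _ in range(iters):
        new = {}
        for page in pages:
            # sum of PR(Ti)/C(Ti) over pages Ti linking to `page`
            incoming = sum(pr[q] / out_count[q]
                           for q, outs in links.items() if page in outs)
            new[page] = p + (1 - p) * incoming
        pr = new
    return pr
```

On a tiny graph A → B, A → C, B → C, C → A, page C receives two in-links and ends up ranked above B.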
The HITS Algorithm
Hyperlink-Induced Topic Search (HITS).
Kleinberg, J. M. (1999). Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632.
Basic idea: sufficiently broad topics contain communities consisting of two types of hyperlinked pages:
- Authorities: the best sources for the requested information; highly referenced pages on a topic
- Hubs: pages that contain links to many authoritative pages
The HITS Algorithm
1. Collect a seed set of pages S (e.g., returned by a search engine)
2. Expand the seed set to include pages that point to, or are pointed to by, pages in the seed set (links internal to a site are removed)
3. Iteratively update the hub weight h(p) and authority weight a(p) of each page:
   a(p) = Σ h(q) over all pages q with a link q → p
   h(p) = Σ a(q) over all pages q with a link p → q
4. After a fixed number of iterations, the pages with the highest hub/authority weights form the core of the community
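Step 3 above can be sketched as alternating sweeps over the expanded link graph. The L2 normalization each round is our addition to keep the scores from growing without bound; the slides only fix the iteration count.

```python
def hits(links, iters=20):
    """links: dict page -> list of pages it links to.
    Returns (hub, authority) score dicts."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        # a(p) = sum of h(q) over links q -> p
        auth = {p: sum(hub[q] for q, outs in links.items() if p in outs)
                for p in pages}
        # h(p) = sum of a(q) over links p -> q
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        for scores in (auth, hub):            # normalize to unit length
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for k in scores:
                scores[k] /= norm
    return hub, auth
```

With two pages both linking to the same targets, the linkers come out as hubs and the targets as authorities.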
Problems with Web Search Today
Today's search engines are plagued by problems:
- The abundance problem: 99% of the information is of no interest to 99% of the people
- Limited coverage of the Web (many internet sources are hidden behind search interfaces); the largest crawlers cover less than 18% of all web pages
- A limited query interface based on keyword-oriented search
- Limited customization to individual users
Problems with Web Search Today (cont.)
- The Web is highly dynamic: many pages are added, removed, and updated every day
- Very high dimensionality
Web Usage Mining
Pages contain information; links are the 'roads'. How do people navigate the Internet?
- Web usage mining (clickstream analysis)
- Information on navigation paths is available in log files
- Logs can be mined from a client or a server perspective
Website Usage Analysis
Why analyze Website usage? Knowledge about how visitors use a Website can:
- Provide guidelines for Website reorganization and help prevent disorientation
- Help designers place important information where visitors look for it
- Drive pre-fetching and caching of web pages
- Enable adaptive Websites (personalization)
Questions that could be answered:
- What are the differences in usage and access patterns among users?
- Which user behaviors change over time?
- How do usage patterns change with quality of service (slow/fast)?
- What is the distribution of network traffic over time?
Website Usage Analysis
There are analysis services such as Analog and Google Analytics, which give basic statistics such as:
- number of hits
- average hits per time period
- the most popular pages on your site
- who is visiting your site
- what keywords users search for to reach you
- what is being downloaded
Web Usage Mining Process
Data Mining: Concepts and Techniques
Data Preparation
Data cleaning:
- Check the suffix of the URL name: for example, remove all log entries with filename suffixes such as gif, jpeg, etc.
User identification:
- If a page is requested that is not directly linked to the previous pages, multiple users are assumed to exist on the same machine
- Other heuristics use a combination of IP address, machine name, browser agent, and temporal information to identify users
Transaction identification:
- A transaction is all of the page references made by a user during a single visit to a site
- The size of a transaction can range from a single page reference to all of the page references in the visit
Sessionizing
Main questions:
- How to identify unique users
- How to identify/define a user transaction
Problems:
- User ids are often suppressed due to security concerns
- Individual IP addresses are sometimes hidden behind proxy servers
- Client-side and proxy caching makes server log data less reliable
Standard solutions/practices:
- User registration: practical?
- Client-side cookies: not foolproof
- Cache busting: increases network traffic
Sessionizing
Time-oriented:
- By total session duration: e.g., no more than 30 minutes
- By page-stay times: e.g., no more than 10 minutes per page (good for short sessions)
Navigation-oriented (good for short sessions and when timestamps are unreliable); a request belongs to the current session if:
- the referrer is the previous page in the session, or
- the referrer is undefined but the request arrives within 10 seconds, or
- there is a link from the previous page to the current page in the web site
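The two time-oriented rules above (30-minute total duration, 10-minute page stay) can be sketched as a single pass over one user's time-ordered clicks; the click-tuple format is our assumption.

```python
from datetime import datetime, timedelta

MAX_SESSION = timedelta(minutes=30)    # total session duration cutoff
MAX_PAGE_STAY = timedelta(minutes=10)  # per-page stay cutoff

def sessionize(clicks):
    """clicks: one user's (user, timestamp, url) tuples, sorted by time.
    Returns a list of sessions, each a list of clicks."""
    sessions, current = [], []
    for click in clicks:
        if current:
            start, prev = current[0][1], current[-1][1]
            # start a new session when either time-oriented rule is violated
            if click[1] - start > MAX_SESSION or click[1] - prev > MAX_PAGE_STAY:
                sessions.append(current)
                current = []
        current.append(click)
    if current:
        sessions.append(current)
    return sessions
```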
Web Usage Mining: Types of Traversal Patterns
- Association rules: which pages are accessed together; support(X) = freq(X) / number of transactions
- Episodes: frequent partially ordered sets of pages; support(X) = freq(X) / number of time windows
- Sequential patterns: frequent ordered sets of pages; support(X) = freq(X) / number of sessions (or customers)
- Forward sequences: remove backward traversals, reloads, and refreshes, e.g., <A,B,A,C> yields <A,B> and <A,C>; support(X) = freq(X) / number of forward sequences
- Maximal forward sequences: support(X) = freq(X) / number of clicks
- Clustering: user clusters (similar navigational behaviour) and page clusters (grouping conceptually related pages)
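The forward-sequence extraction above (splitting <A,B,A,C> into <A,B> and <A,C> by cutting at backward traversals) can be sketched as follows; treating any revisit of a page already in the current run as a backward traversal is our simplifying assumption.

```python
def forward_sequences(path):
    """Split a click path into maximal forward sequences by cutting at
    backward traversals, e.g. ['A','B','A','C'] -> [['A','B'], ['A','C']]."""
    sequences, current = [], []
    for page in path:
        if page in current:
            # backward traversal: emit the forward run so far,
            # then resume from the revisited page
            sequences.append(current[:])
            current = current[:current.index(page) + 1]
        else:
            current.append(page)
    if current:
        sequences.append(current)
    return sequences
```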
Recommender Systems
Recommender Systems
An RS addresses a problem of information filtering and machine learning: it seeks to predict the 'rating' that a user would give to an item he/she has not yet considered. Goals:
- Enhance the user experience
- Assist users in finding information
- Reduce search and navigation time
Types of RS
Three broad types:
- Content-based RS
- Collaborative RS
- Hybrid RS
Types of RS – Content based RS
Content-based RS highlights:
- Recommend items similar to those the user preferred in the past
- User profiling is the key
- Items/content are usually described by keywords
- Matching "user preferences" with "item characteristics" works for textual information
- The Vector Space Model is widely used
Types of RS – Content based RS
Content-based RS limitations:
- Not all content is well represented by keywords, e.g., images
- Items represented by the same set of features are indistinguishable
- Users with thousands of purchases are a problem
- New user: no history available
Types of RS – Collaborative RS
Collaborative RS highlights:
- Use other users' recommendations (ratings) to judge an item's utility
- The key is to find users/user groups whose interests match those of the current user
- The Vector Space Model is widely used (the dimensions of the vectors are user-supplied ratings)
- More users and more ratings mean better results
- Can also account for items dissimilar to the ones a user has seen in the past
- Example: movielens.org
Types of Collaborative Filtering
- User-based collaborative filtering
- Item-based collaborative filtering
User-based Collaborative Filtering
Idea: people who agreed in the past are likely to agree again.
- To predict a user's opinion of an item, use the opinions of similar users
- Similarity between users is determined by the overlap in their opinions on other items
Example: User-based Collaborative Filtering
(Rating matrix over Items 1-5 and Users 1-6. User 1 has rated Item 1 = 8, Item 2 = 1, Item 4 = 2, Item 5 = 7; Item 3 is the rating to predict. The other users' ratings are not fully recoverable from the slide.)
Similarity between users
(The same rating matrix: User 1 has rated Items 1, 2, 4, 5 as 8, 1, 2, 7; Item 3 is unknown.)
- How similar are users 1 and 2?
- How similar are users 1 and 5?
- How do you calculate similarity?
Similarity between users: simple way
- Only consider items that both users have rated
- For each such item, calculate the difference in the two users' ratings
- Take the average of this difference over the items:
  distance(User 1, User 2) = average over items j rated by both users of |rating(User 1, Item j) - rating(User 2, Item j)|
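The simple measure above, the mean absolute rating difference over co-rated items, is a few lines (function name and dict representation are ours); a smaller distance means more similar users.

```python
def rating_distance(ratings_a, ratings_b):
    """Mean absolute rating difference over co-rated items.
    ratings_*: dict item -> rating.  Lower distance = more similar."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return None   # no overlap: similarity is undefined
    return sum(abs(ratings_a[i] - ratings_b[i]) for i in common) / len(common)
```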
Algorithm 1: using entire matrix
(Figure: six users, with the target user in the middle; the distance between users reflects how similar they are. The numbers shown, 5, 7, 7, 8, 4, are the other users' ratings for Item 3. The ratings of all other users are combined by an aggregation function, often a weighted sum whose weights depend on similarity, to produce the predicted rating for the target user.)
Algorithm 2: K-Nearest-Neighbour
Neighbours are people who have historically had the same taste as our user.
(Figure: as before, but only the nearest neighbours of the target user contribute their ratings for Item 3; their ratings are combined by an aggregation function, often a similarity-weighted sum, to produce the predicted rating.)
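The k-nearest-neighbour prediction described above can be sketched as follows. The slides say only that the aggregation is "often a weighted sum" with weights depending on similarity, so the specific choices here, mean-absolute-difference distance and 1/(1 + distance) weights, are assumptions.

```python
def predict_rating(target, others, item, k=3):
    """Predict `target` user's rating for `item` by a k-nearest-neighbour
    weighted average.  target: dict item -> rating; others: list of dicts."""
    def distance(a, b):
        common = set(a) & set(b)
        if not common:
            return float('inf')
        return sum(abs(a[i] - b[i]) for i in common) / len(common)

    # neighbours who rated the item, most similar (smallest distance) first
    rated = sorted((u for u in others if item in u),
                   key=lambda u: distance(target, u))
    neighbours = rated[:k]
    # closer neighbours get a larger weight in the aggregation
    weights = [1.0 / (1.0 + distance(target, u)) for u in neighbours]
    return sum(w * u[item] for w, u in zip(weights, neighbours)) / sum(weights)
```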
Item-based Collaborative Filtering
Idea: a user is likely to have the same opinion of similar items (the same idea as in content-based filtering).
- Similarity between items is determined by how other users have rated them (unlike content-based filtering, where item features are used)
Advantages compared to user-based CF:
- Prevents the user cold-start problem
- Improves scalability (similarity between items is more stable than similarity between users)
Example: Item-based Collaborative Filtering
(The same rating matrix as before: User 1 has rated Items 1, 2, 4, 5 as 8, 1, 2, 7, and Item 3 is to be predicted; the other users' ratings are not fully recoverable from the slide.)
Similarity between items
(Columns of the rating matrix; each row holds one user's ratings of the items.)
- How similar are items 3 and 4?
- How similar are items 3 and 5?
- How do you calculate similarity?
Similarity between items: simple way
- Only consider users who have rated both items
- For each such user, calculate the difference between his/her ratings of the two items
- Take the average of this difference over the users:
  distance(Item 3, Item 4) = average over users i who rated both items of |rating(User i, Item 3) - rating(User i, Item 4)|
Algorithms
As in user-based CF, we can use the nearest neighbours or all items.
(Figure: five items, with Item 3 the one to predict for User 1. The distance of each item to Item 3 indicates similarity, based on past ratings by other users. The numbers 8, 1, 7, 2 are User 1's ratings for Items 1, 2, 5, 4. The nearest-neighbour items, those most similar to Item 3, are combined by an aggregation function, often a weighted sum whose weights depend on similarity.)
Types of RS – Collaborative RS
Collaborative RS limitations:
- Different users may use the rating scale differently; a possible solution is weighted ratings, i.e., deviations from each user's average rating
- Finding similar users/user groups is not easy
- New user: no preferences available (user cold-start problem)
- New item: no ratings available (item cold-start problem)
- Demographic filtering may be required
Some ways to make a Hybrid RS
- Weighted: the ratings of several recommendation techniques are combined to produce a single recommendation
- Switching: the system switches between recommendation techniques depending on the current situation
- Mixed: recommendations from several different recommenders are presented simultaneously (e.g., Amazon)
- Cascade: one recommender refines the recommendations given by another
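The first (weighted) variant above is the simplest to sketch: combine several recommenders' scores with fixed weights and rank by the sum. The function signature and score range are assumptions for illustration.

```python
def weighted_hybrid(recommenders, weights, user, items):
    """Weighted hybrid RS: each recommender is a function
    (user, item) -> score; combined score is the weighted sum."""
    scores = {item: sum(w * rec(user, item)
                        for rec, w in zip(recommenders, weights))
              for item in items}
    # recommend items with the highest combined score first
    return sorted(items, key=lambda it: scores[it], reverse=True)
```

Shifting the weights shifts the ranking toward whichever recommender is trusted more, which is the point of the weighted scheme.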
Model-based collaborative filtering
Instead of using ratings directly, develop a model of user ratings and use the model to predict ratings for new items.
To build the model:
- Bayesian networks (probabilistic)
- Clustering (classification)
- Rule-based approaches (e.g., association rules between co-purchased items)
Model-based collaborative filtering
Cluster models:
- Create clusters or groups of customers and assign each customer to a category
- Classification simplifies the task of user matching
- Better scalability and performance
- Lower accuracy than plain collaborative filtering
Possible Improvement in RS
Better understanding of users and items via social networks (social RS):
- User level: highlight the interests, hobbies, and keywords people have in common
- Item level: link those keywords to e-commerce items