
1 Collective Intelligence
Week 3: Crawling, Searching, Ranking
Old Dominion University, Department of Computer Science
CS 795/895 Spring 2009
Michael L. Nelson
1/28/09

2 Crawling is Messy…
>>> import simple
>>> pagelist=['http://www.cs.odu.edu/']
>>> crawler=simple.crawler('')
>>> crawler.crawl(pagelist)
Indexing http://www.cs.odu.edu/
Indexing http://www.odu.edu/oduhome/policies.shtml
Indexing https://sysweb.cs.odu.edu/online
Indexing https://exchange.cs.odu.edu/
Indexing http://system.cs.odu.edu/?page=faq
Indexing http://www.cs.odu.edu/facilities.shtml
Indexing http://www.cs.odu.edu/phdvod.shtml
Indexing http://www.ncstrl.org/
Indexing http://system.cs.odu.edu/?page=faq&id=labhours
Indexing http://www.cs.odu.edu/research_oppor.shtml
Could not open http://www.cs.odu.edu/../stats/awstats.pl
Indexing http://www.cs.odu.edu/course_info_ug.shtml
Indexing http://www.odu.edu/af/finaid/index.shtml
…
Indexing http://system.cs.odu.edu/?page=services-vclab
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "simple.py", line 52, in crawl
…
Not every URL will open. Not every URL will parse.
Note: this is the code from pp. 55-57, not the final, distributed version.

3 Crawling Our Local Web
>>> import searchengine
>>> crawler=searchengine.crawler('mln.db')
>>> crawler.createindextables()
>>> pagelist=['http://www.cs.odu.edu/']
>>> crawler.crawl(pagelist)
Indexing http://www.cs.odu.edu/
Indexing http://www.odu.edu/oduhome/policies.shtml
Indexing https://sysweb.cs.odu.edu/online
Indexing https://exchange.cs.odu.edu/
Indexing http://system.cs.odu.edu/?page=faq
Indexing http://www.cs.odu.edu/facilities.shtml
Indexing http://www.cs.odu.edu/phdvod.shtml
Indexing http://www.ncstrl.org/
Indexing http://system.cs.odu.edu/?page=faq&id=labhours
Indexing http://www.cs.odu.edu/research_oppor.shtml
Could not open http://www.cs.odu.edu/../stats/awstats.pl
Indexing http://www.cs.odu.edu/course_info_ug.shtml
Indexing http://www.odu.edu/af/finaid/index.shtml
…
Three changes to the distributed code are needed:
1. s/Null/None/
2. s/separateWords/separatewords/
3. check spacing on last line in crawl()

4 Processing The Page
1. Get the page
2. If HTML, create a "soup"
3. Strip the HTML out of the soup (all terms in one string)
4. Parse the separate terms out of the string and store them in the index
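A minimal sketch of these four steps, assuming present-day stand-ins for the book's libraries (urllib.request in place of urllib2, bs4 in place of the old BeautifulSoup); the bodies below are illustrative, not the distributed code:

import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

def gettextonly(url):
    html = urlopen(url).read()                  # 1. get the page
    soup = BeautifulSoup(html, 'html.parser')   # 2. create a "soup"
    return soup.get_text()                      # 3. strip the HTML: all terms in one string

def separatewords(text):
    # 4. parse the separate terms out of the string (storage handled elsewhere)
    return [w.lower() for w in re.split(r'\W+', text) if w != '']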

5 Schema for the Processed Pages
def createindextables(self):
    self.con.execute('create table urllist(url)')
    self.con.execute('create table wordlist(word)')
    self.con.execute('create table wordlocation(urlid,wordid,location)')
    self.con.execute('create table link(fromid integer,toid integer)')
    self.con.execute('create table linkwords(wordid,linkid)')
    self.con.execute('create index wordidx on wordlist(word)')
    self.con.execute('create index urlidx on urllist(url)')
    self.con.execute('create index wordurlidx on wordlocation(wordid)')
    self.con.execute('create index urltoidx on link(toid)')
    self.con.execute('create index urlfromidx on link(fromid)')
    self.dbcommit()
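Roughly: each URL gets a rowid in urllist, each distinct term a rowid in wordlist, and every (page, word, position) triple becomes a row in wordlocation. A hedged sketch of that flow (the book's addtoindex() does this through helper methods; the raw SQL form below is my illustration, not the distributed code):

import sqlite3

def addtoindex(con, url, words):
    # one row per page
    urlid = con.execute('insert into urllist(url) values (?)', (url,)).lastrowid
    for location, word in enumerate(words):
        # reuse the word's rowid if we have seen it before
        row = con.execute('select rowid from wordlist where word=?', (word,)).fetchone()
        wordid = row[0] if row else \
            con.execute('insert into wordlist(word) values (?)', (word,)).lastrowid
        con.execute('insert into wordlocation(urlid,wordid,location) values (?,?,?)',
                    (urlid, wordid, location))
    con.commit()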

6 Searching Our Index
>>> e=searchengine.searcher('mln.db')
>>> e.getmatchrows('old dominion')
(lots of results)
>>> e.getmatchrows('monarch')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=1007
([(23, 371), (3, 609)], [1007])
>>> e.query('monarch')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=1007
0.000000  http://system.cs.odu.edu/?page=faq&id=labhours
0.000000  http://www.odu.edu/oduhome/azindex.shtml
([1007], [23, 3])
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
0.000000  http://www.cs.odu.edu/recent_presentations.shtml
0.000000  http://www.cs.odu.edu/recent_publications.shtml
0.000000  http://www.cs.odu.edu/recent_grants.shtml
0.000000  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.000000  http://www.cs.odu.edu/awards_honors.shtml
0.000000  http://www.cs.odu.edu/faculty.shtml
0.000000  http://www.cs.odu.edu/~ibl/sum09all.html
0.000000  http://www.cs.odu.edu/~ibl/spr09all.html
([2297], [49, 48, 47, 46, 34, 20, 14, 13])
N.B.: the weights list is empty: weights=[]
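getmatchrows() produces that select statement by self-joining wordlocation once per query term. A simplified sketch of the query builder (this mirrors the shape of the book's code; the standalone-function form is mine):

def buildquery(wordids):
    fields = 'w0.urlid'
    tables, clauses = [], []
    for i, wordid in enumerate(wordids):
        if i > 0:
            # each additional term joins back on the same urlid
            clauses.append('w%d.urlid=w%d.urlid' % (i - 1, i))
        fields += ',w%d.location' % i
        tables.append('wordlocation w%d' % i)
        clauses.append('w%d.wordid=%d' % (i, wordid))
    return 'select %s from %s where %s' % (
        fields, ','.join(tables), ' and '.join(clauses))

# buildquery([2296, 2297]) reproduces the two-word join shown in the
# 'michael nelson' queries on the later slides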

7 We Can Do SQL
>>> cur=e.con.execute('select * from wordlist')
>>> for i in range(3): print cur.next()
...
(u'doctype',)
(u'html',)
(u'public',)
>>> cur=e.con.execute('select url from urllist')
>>> for i in range(8): print cur.next()
...
(u'http://www.cs.odu.edu/',)
(u'http://www.odu.edu/',)
(u'http://www.odu.edu/oduhome/azindex.shtml',)
(u'http://www.odu.edu/oduhome/directories.shtml',)
(u'http://sci.odu.edu',)
(u'http://www.cs.odu.edu/program_info.shtml',)
(u'http://www.cs.odu.edu/course_information.shtml',)
(u'http://www.cs.odu.edu/program_info_ug.shtml',)

8 No Stemming in Our Database
>>> e.query('test')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=363
0.000000  http://www.cs.odu.edu/faq.shtml
0.000000  http://www.cs.odu.edu/recent_presentations.shtml
0.000000  http://www.cs.odu.edu/teaching_metrics.htm
0.000000  http://www.cs.odu.edu/employment_oppor.shtml
0.000000  http://www.cs.odu.edu/
([363], [63, 49, 40, 16, 1])
>>> e.query('testing')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=1628
0.000000  http://www.cs.odu.edu/faculty.shtml
0.000000  http://www.cs.odu.edu/employment_oppor.shtml
0.000000  http://www.cs.odu.edu/advise_info_ug.shtml
0.000000  http://www.cs.odu.edu/course_information.shtml
0.000000  http://www.odu.edu/oduhome/azindex.shtml
([1628], [20, 16, 10, 7, 3])

9 Porter Stemmer
[Stemmer diagram; image from: http://www.comp.lancs.ac.uk/computing/research/stemming/Links/porter.htm]
Original paper, sample input & output, various implementations at: http://tartarus.org/~martin/PorterStemmer/
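If we wanted stemming, one drop-in option is NLTK's Porter implementation (assuming NLTK is installed); stemming each term before it enters wordlist would collapse the two queries on the previous slide into one entry:

from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for w in ('test', 'testing', 'tested'):
    print(w, '->', stemmer.stem(w))   # all three stem to 'test'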

10 Ranking
Currently documents are returned in order of ingest -- not good.
The book covers 3 (of a possible 230480283408) ranking mechanisms based on the content of the documents themselves:
– word frequency: the more often a word appears in the document, the more likely it is what the document is "about" (cf. Term Frequency, the TF in TFIDF, from last lecture)
– location in document: if the word appears near the "top" of the document it is more likely to capture "aboutness" (ex: word in title, intro, abstract)
– word distance: for multi-word queries, give higher rank to documents that feature the terms in closer proximity
  ex: d1="welcome home to New York", d2="new homes being built in York County"
  q1="new york": rank d1, d2
  q2="new home": rank d2, d1
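Word frequency is the easiest of the three to compute against our schema. A sketch in the spirit of the book's frequencyscore() with its normalization step folded in (illustrative, not the distributed code):

def frequencyscore(rows):
    # rows are (urlid, location, ...) tuples from getmatchrows()
    counts = {}
    for row in rows:
        counts[row[0]] = counts.get(row[0], 0) + 1
    # normalize so the best page scores 1.0, as in the transcripts that follow
    maxscore = float(max(counts.values()))
    return dict((u, c / maxscore) for u, c in counts.items())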

11 Precision and Recall
Precision
– "ratio of the number of relevant documents retrieved over the total number of documents retrieved" (p. 10)
– how much extra stuff did you get?
Recall
– "ratio of relevant documents retrieved for a given query over the number of relevant documents for that query in the database" (p. 10)
– note: assumes a priori knowledge of the denominator!
– how much did you miss?
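A tiny worked example of the two ratios (the document IDs and relevance judgments here are made up):

retrieved = {'d1', 'd2', 'd3', 'd4'}   # what the engine returned
relevant  = {'d1', 'd3', 'd5'}         # a priori ground truth
hits = retrieved & relevant            # {'d1', 'd3'}

precision = len(hits) / len(retrieved)  # 2/4 = 0.50: d2, d4 were extra stuff
recall    = len(hits) / len(relevant)   # 2/3 = 0.67: we missed d5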

12 Precision and Recall
[Figure 1.2 in FBY: precision plotted against recall, both on a 0-1 scale]
An increase in one dimension is generally accompanied by a decrease in the other.
ex: stemming increases recall, but at the expense of precision

13 Why Isn’t Precision Always 100%? What were we really searching for? Science? Games? Music?

14 Why Isn’t Recall Always 100%? Virginia Agricultural and Mechanical College? Virginia Agricultural and Mechanical College and Polytechnic Institute? Virginia Polytechnic Institute? Virginia Polytechnic Institute and State University? Virginia Tech?

15 Ranking With Frequency
With getscoredlist() weights=[(1.0,self.frequencyscore(rows))]:
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
1.000000  http://www.cs.odu.edu/recent_publications.shtml
0.600000  http://www.cs.odu.edu/~ibl/spr09all.html
0.400000  http://www.cs.odu.edu/recent_grants.shtml
0.300000  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.300000  http://www.cs.odu.edu/awards_honors.shtml
0.100000  http://www.cs.odu.edu/~ibl/sum09all.html
0.050000  http://www.cs.odu.edu/recent_presentations.shtml
0.050000  http://www.cs.odu.edu/faculty.shtml
([2297], [48, 13, 47, 46, 34, 14, 49, 20])
Compare with getscoredlist() weights=[] (unranked):
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
0.000000  http://www.cs.odu.edu/recent_presentations.shtml
0.000000  http://www.cs.odu.edu/recent_publications.shtml
0.000000  http://www.cs.odu.edu/recent_grants.shtml
0.000000  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.000000  http://www.cs.odu.edu/awards_honors.shtml
0.000000  http://www.cs.odu.edu/faculty.shtml
0.000000  http://www.cs.odu.edu/~ibl/sum09all.html
0.000000  http://www.cs.odu.edu/~ibl/spr09all.html
([2297], [49, 48, 47, 46, 34, 20, 14, 13])

16 Ranking With Location, Location + Frequency
With getscoredlist() weights=[(1.0,self.locationscore(rows))]:
>>> reload(searchengine)
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
1.000000  http://www.cs.odu.edu/recent_publications.shtml
0.921290  http://www.cs.odu.edu/awards_honors.shtml
0.856115  http://www.cs.odu.edu/recent_grants.shtml
0.611301  http://www.cs.odu.edu/~ibl/sum09all.html
0.542553  http://www.cs.odu.edu/faculty.shtml
0.423990  http://www.cs.odu.edu/~ibl/spr09all.html
0.381614  http://www.cs.odu.edu/recent_presentations.shtml
0.040971  http://www.cs.odu.edu/Grad_List_2004-05.htm
([2297], [48, 34, 47, 14, 20, 13, 49, 46])
With getscoredlist() weights=[(1.0,self.locationscore(rows)), (1.0,self.frequencyscore(rows))]:
>>> reload(searchengine)
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
2.000000  http://www.cs.odu.edu/recent_publications.shtml
1.256115  http://www.cs.odu.edu/recent_grants.shtml
1.221290  http://www.cs.odu.edu/awards_honors.shtml
1.023990  http://www.cs.odu.edu/~ibl/spr09all.html
0.711301  http://www.cs.odu.edu/~ibl/sum09all.html
0.592553  http://www.cs.odu.edu/faculty.shtml
0.431614  http://www.cs.odu.edu/recent_presentations.shtml
0.340971  http://www.cs.odu.edu/Grad_List_2004-05.htm
([2297], [48, 47, 34, 13, 14, 20, 49, 46])

17 Ranking With Distance
With getscoredlist() weights=[] (unranked):
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('michael nelson')
select w0.urlid,w0.location,w1.location from wordlocation w0,wordlocation w1 where w0.wordid=2296 and w0.urlid=w1.urlid and w1.wordid=2297
0.000000  http://www.cs.odu.edu/recent_presentations.shtml
0.000000  http://www.cs.odu.edu/recent_publications.shtml
0.000000  http://www.cs.odu.edu/recent_grants.shtml
0.000000  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.000000  http://www.cs.odu.edu/faculty.shtml
([2296, 2297], [49, 48, 47, 46, 20])
With getscoredlist() weights=[(1.0,self.distancescore(rows))]:
>>> reload(searchengine)
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('michael nelson')
select w0.urlid,w0.location,w1.location from wordlocation w0,wordlocation w1 where w0.wordid=2296 and w0.urlid=w1.urlid and w1.wordid=2297
1.000000  http://www.cs.odu.edu/recent_grants.shtml
1.000000  http://www.cs.odu.edu/faculty.shtml
0.500000  http://www.cs.odu.edu/recent_presentations.shtml
0.500000  http://www.cs.odu.edu/recent_publications.shtml
0.002874  http://www.cs.odu.edu/Grad_List_2004-05.htm
([2296, 2297], [47, 20, 49, 48, 46])
The last page has 2 "Michael"s + 1 "Nelson", but not in proximity (the "Nelson" is not even visible…)
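A sketch reconstructing the idea behind distancescore(): sum the gaps between adjacent query-term locations in each match row, keep each page's minimum, and normalize so the closest page scores 1.0. The details below are my reconstruction from the output above, not the distributed code:

def distancescore(rows):
    # rows are (urlid, loc0, loc1, ...); single-word queries all tie
    if len(rows[0]) <= 2:
        return dict((row[0], 1.0) for row in rows)
    mindistance = {}
    for row in rows:
        d = sum(abs(row[i] - row[i - 1]) for i in range(2, len(row)))
        mindistance[row[0]] = min(d, mindistance.get(row[0], d))
    # invert while normalizing: smaller distance means higher score
    minscore = float(min(mindistance.values()))
    return dict((u, minscore / max(d, 0.00001)) for u, d in mindistance.items())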

18 Link-Based Metrics
Content-based metrics have an implicit assumption: everyone is telling the truth!
– Lynch, "When Documents Deceive": http://scholar.google.com/scholar?cluster=4682764276311632091
– AIRWeb: Adversarial Information Retrieval on the Web: http://airweb.cse.lehigh.edu/
We can mine the collective intelligence of the web community by seeing how they voted with their links.
– assumption: when choosing a target for their web page links, people do a good job of filtering out spam, poor quality, etc.
– result: your document's ranking is influenced by the documents of others

19 Want to link "to" a review of DJ Shadow's "The Outsider"?
http://www.google.com/search?q=dj+shadow+the+outsider+review
– where's the most knowledgeable review ever, on http://f-measure.blogspot.com ???
– class assignment: everyone go home and create 10 pages that link to: http://f-measure.blogspot.com/2009/01/dj-shadow-outsider-lp-review.html

20 Ranking by Counting In-Links
With getscoredlist() weights=[(1.0,self.inboundlinkscore(rows))]:
>>> reload(searchengine)
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('michael nelson')
select w0.urlid,w0.location,w1.location from wordlocation w0,wordlocation w1 where w0.wordid=2296 and w0.urlid=w1.urlid and w1.wordid=2297
1.000000  http://www.cs.odu.edu/faculty.shtml
0.914286  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.885714  http://www.cs.odu.edu/recent_presentations.shtml
0.885714  http://www.cs.odu.edu/recent_publications.shtml
0.885714  http://www.cs.odu.edu/recent_grants.shtml
([2296, 2297], [20, 46, 49, 48, 47])
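Counting in-links needs nothing beyond the link table from slide 5. A sketch along the lines of the book's inboundlinkscore(), written against a raw connection for illustration:

def inboundlinkscore(con, rows):
    uniqueurls = set(row[0] for row in rows)
    inlinks = dict(
        (u, con.execute('select count(*) from link where toid=?', (u,)).fetchone()[0])
        for u in uniqueurls)
    # normalize so the best-linked page scores 1.0 (guard against no in-links)
    maxscore = float(max(inlinks.values())) or 1.0
    return dict((u, c / maxscore) for u, c in inlinks.items())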

21 But Not All Links Are Equal…
You linking to my LP review is nice, but it's not as nice as it would be if it were linked to by Spin Magazine, Rolling Stone, MTV, etc.
– a page's "importance" is defined by having other important pages link to it

22 Calculating PageRank
PR(A) = 0.15 + 0.85 * ( PR(B)/links(B) + PR(C)/links(C) + PR(D)/links(D) )
      = 0.15 + 0.85 * ( 0.5/4 + 0.7/5 + 0.2/1 )
      = 0.15 + 0.85 * ( 0.125 + 0.14 + 0.2 )
      = 0.15 + 0.85 * 0.465
      = 0.54525
(fig 4-3 needs an extra link from C to match the text)
"Random surfer" model = some guy just following links until he gets bored and randomly jumps to a new page (i.e., arrives via a method other than following links).
damping factor d = 0.85 (probability the surfer landed on the page by following a link)
1-d = 0.15 (probability the surfer landed on the page at "random")
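A self-contained sketch of the iterative calculation (the toy graph below is made up, not fig 4-3; d = 0.85 as above):

def pagerank(links, d=0.85, iterations=20):
    # links maps each page to the list of pages it links to
    pr = dict((p, 1.0) for p in links)   # every page starts at PR = 1.0
    for _ in range(iterations):
        nextpr = {}
        for page in links:
            # sum PR(q)/links(q) over every page q that links to this page
            rank = sum(pr[q] / len(links[q]) for q in links if page in links[q])
            nextpr[page] = (1 - d) + d * rank
        pr = nextpr
    return pr

print(pagerank({'A': ['B'], 'B': ['A', 'C'], 'C': ['A']}))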

23 Ranking by PageRank
With getscoredlist() weights=[(1.0,self.pagerankscore(rows))]:
>>> reload(searchengine)
>>> import searchengine
>>> crawler=searchengine.crawler('mln.db')
>>> crawler.calculatepagerank()
Iteration 0
Iteration 1
Iteration 2
…
Iteration 18
Iteration 19
>>> e=searchengine.searcher('mln.db')
>>> e.query('michael nelson')
select w0.urlid,w0.location,w1.location from wordlocation w0,wordlocation w1 where w0.wordid=2296 and w0.urlid=w1.urlid and w1.wordid=2297
1.000000  http://www.cs.odu.edu/Grad_List_2004-05.htm
0.991401  http://www.cs.odu.edu/faculty.shtml
0.984504  http://www.cs.odu.edu/recent_presentations.shtml
0.984504  http://www.cs.odu.edu/recent_publications.shtml
0.984213  http://www.cs.odu.edu/recent_grants.shtml
([2296, 2297], [46, 20, 49, 48, 47])
The top two URLs swapped positions versus the in-link count (but just barely).
The code in the book always stops after 20 iterations; it could instead stop once the scores converge below a threshold.

24 Ranking by Link (Anchor) Text
With getscoredlist() weights=[(1.0,self.linktextscore(rows,wordids))]:
>>> reload(searchengine)
>>> import searchengine
>>> e=searchengine.searcher('mln.db')
>>> e.query('nelson')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=2297
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "searchengine.py", line 253, in query
    scores=self.getscoredlist(rows,wordids)
  File "searchengine.py", line 233, in getscoredlist
    weights=[(1.0,self.linktextscore(rows,wordids))]
  File "searchengine.py", line 310, in linktextscore
    normalizedscores=dict([(u,float(l)/maxscore) for (u,l) in linkscores.items()])
ZeroDivisionError: float division
>>> e.query('recent')
select w0.urlid,w0.location from wordlocation w0 where w0.wordid=206
1.000000  http://www.cs.odu.edu/recent_grants.shtml
0.999989  http://www.cs.odu.edu/recent_presentations.shtml
0.999989  http://www.cs.odu.edu/recent_publications.shtml
0.000000  http://www.cs.odu.edu/faq.shtml
0.000000  http://www.cs.odu.edu/facilities.shtml
0.000000  http://www.cs.odu.edu/organization.shtml
0.000000  http://www.cs.odu.edu/by_laws.shtml
0.000000  http://www.cs.odu.edu/student_goals.shtml
0.000000  http://www.cs.odu.edu/mission_statement.shtml
0.000000  http://www.cs.odu.edu/chairs_welcome.shtml
([206], [47, 49, 48, 63, 62, 61, 60, 59, 58, 57])
Bad error handling: no page has "nelson" in inbound anchor text, so maxscore is 0 and the division crashes.
The first three results have the word on the page and in the anchor text of links pointing to them; the others have the word on the page, but no inbound link text contains it.
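The crash is the line-310 normalization dividing by maxscore when no candidate page has any matching anchor text. A hedged one-guard fix, as a sketch rather than the distributed code:

def normalize_linkscores(linkscores):
    maxscore = max(linkscores.values())
    if maxscore == 0:
        # no inbound anchor text matched: nothing to rank on, score all 0
        return dict((u, 0.0) for u in linkscores)
    return dict((u, float(l) / maxscore) for u, l in linkscores.items())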

25 Much, Much More
A million variations, optimizations, analyses, etc. for PageRank:
– http://scholar.google.com/scholar?q=pagerank
– Problems: preferential attachment (rich get richer), the "random surfer model" is not accurate (cf. the "search dominant model"), etc.
See papers by Cho et al.:
– Page Quality: In Search of an Unbiased Web Ranking: http://oak.cs.ucla.edu/~cho/papers/cho-quality-long.pdf
– Impact of Web Search Engines on Page Popularity: http://oak.cs.ucla.edu/~cho/papers/cho-bias.pdf
Alternatives to PageRank:
– ex: Kleinberg's Hubs and Authorities: http://en.wikipedia.org/wiki/Hubs_and_authorities

26 Voting With Your Clicks
Counting links mines what page authors do, but what about mining what readers click on?
– the holy grail for advertisers
– more privacy concerns than I could ever hope to cover…
– one minor, nasty little detail: usage data is notoriously hard to get…

27 Neural Networks
Mapping queries (world, river, bank) to URLs (World Bank, River, Earth).
The hidden layer is unknown to us; we are trying to model a user's cognitive powers.
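A minimal sketch of the feed-forward pass the book's nn.py performs over this three-layer shape (tanh activations as in the book; the weight matrices here are stand-ins for what searchnet stores in its database):

import math

def feedforward(wi, wo):
    # wi: input->hidden weights, wo: hidden->output weights
    ai = [1.0] * len(wi)                                   # input activations
    ah = [math.tanh(sum(ai[i] * wi[i][j] for i in range(len(wi))))
          for j in range(len(wi[0]))]                      # hidden activations
    ao = [math.tanh(sum(ah[j] * wo[j][k] for j in range(len(wo))))
          for k in range(len(wo[0]))]                      # output activations
    return ao

# two query words -> one hidden node -> three URLs, with made-up weights:
print(feedforward([[0.2], [0.2]], [[0.1, 0.1, 0.1]]))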

28 Building Our NN
>>> import nn
>>> mynet=nn.searchnet('nn.db')
>>> mynet.maketables()
>>> wWorld,wRiver,wBank =101,102,103
>>> uWorldBank,uRiver,uEarth =201,202,203
>>> mynet.generatehiddennode([wWorld,wBank],[uWorldBank,uRiver,uEarth])
>>> mynet.getresult([wWorld,wBank],[uWorldBank,uRiver,uEarth])
[0.076012508375416163, 0.076012508375416163, 0.076012508375416163]
>>> mynet.trainquery([wWorld,wBank],[uWorldBank,uRiver,uEarth],uWorldBank)
>>> mynet.getresult([wWorld,wBank],[uWorldBank,uRiver,uEarth])
[0.33506324671253307, 0.055127057492087995, 0.055127057492087995]
Without training, all outcomes are equally likely.
With training, the "world bank" query is more likely to map to the "WorldBank" URL.

29 More Training
>>> allurls=[uWorldBank,uRiver,uEarth]
>>> for i in range(30):
...   mynet.trainquery([wWorld,wBank],allurls,uWorldBank)
...   mynet.trainquery([wRiver,wBank],allurls,uRiver)
...   mynet.trainquery([wWorld],allurls,uEarth)
...
>>> mynet.getresult([wWorld,wBank],allurls)
[0.86154797717394405, 0.01107121517146442, 0.015725794221216588]
>>> mynet.getresult([wRiver,wBank],allurls)
[-0.030344006191459792, 0.88298149804489123, 0.0055095622708861269]
>>> mynet.getresult([wBank],allurls)
[0.86540476120703247, -0.00067859116915916105, -0.85191567250806755]
We've never seen just a "Bank" query, but we can predict the results. Neat.
The book provides a mechanism to include nnscore() in the weights list. But where does the training data come from?

30 Another Connectionist Example: Hebbian Learning
Not in the book, but Bollen & Nelson did some research on using Hebbian Learning & "smart objects" for real-time link adjustment:
– "Distributed, Real-Time Computation of Community Preferences", HT 2005
– "Dynamic Linking of Smart Digital Objects Based on User Navigation Patterns", cs.DL/0401029
– "Adaptive Network of Smart Objects", ICCP 2002
Promising, but notable cold-start problems.


