a (corny) ending

Course Outcomes After this course, you should be able to answer: –How search engines work, and why some are better than others –Can the Web be seen as a collection of (semi)structured databases? If so, can we adapt database technology to the Web? –Can useful patterns be mined from the pages/data of the Web? What did you think these were going to be? REVIEW

Main Topics Approximately three halves plus a bit: –Information retrieval –Information integration/aggregation –Information mining –Other topics as permitted by time REVIEW

Adapting Old Disciplines for the Web Age Information (text) retrieval –Scale of the Web –Hypertext / link structure –Authority/hub computations Databases –Multiple databases: heterogeneous, access-limited, partially overlapping –Network (un)reliability Data mining [machine learning/statistics/databases] –Learning patterns from large-scale data Social networks REVIEW
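As one concrete instance of the "authority/hub computations" mentioned above, here is a minimal sketch of the HITS algorithm on a toy graph. The graph and page names are made up for illustration; this is a sketch of the general technique, not code from the course.

```python
import math

def hits(links, iters=50):
    """Compute hub and authority scores on an adjacency dict {page: [outlinks]}."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iters):
        # A page's authority is the sum of the hub scores of pages pointing at it.
        auth = {p: sum(hub[q] for q, outs in links.items() if p in outs) for p in pages}
        # A page's hub score is the sum of the authorities it points at.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalize so the scores stay bounded across iterations.
        for d in (auth, hub):
            norm = math.sqrt(sum(v * v for v in d.values()))
            for p in d:
                d[p] /= norm
    return hub, auth

# Tiny toy graph: H1 and H2 act as hubs pointing at authorities A1, A2.
links = {"H1": ["A1", "A2"], "H2": ["A1", "A2"], "A1": [], "A2": []}
hub, auth = hits(links)
```

On this graph the two pages that are linked to (A1, A2) end up with the high authority scores, while the linking pages (H1, H2) end up as hubs.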

Topics Covered 1. Introduction 2. Text retrieval; vector-space ranking 3. Indexing/retrieval issues 4. Correlation analysis & Latent Semantic Indexing 5. Search engine technology 6. Social networks 7. Anatomy of Google etc. 8. Clustering 9. Text classification 10. Filtering/personalization 11. Web & databases: why do we even care? 12. XML and handling semi-structured data 13. Semantic web and its standards (RDF/RDF-S/OWL...) 14. Information extraction 15. Data/information integration/aggregation 16. Query processing in data integration: gathering and using source statistics

Topics Covered Introduction (2) Text retrieval; vector-space ranking (3) Indexing; tolerant retrieval (1) Correlation analysis & Latent Semantic Indexing (3) Link analysis in Web search (A/H; PageRank) (4) Crawling; Map/Reduce (2) Social networks (3) Clustering (2) Text classification (2) Filtering/recommender systems (1.5) Computational advertising (1) XML and handling semi-structured data + Semantic Web standards (3) Information extraction (2.5) Information/data integration (1.5)

Finding “Sweet Spots” in Computer-Mediated Cooperative Work It is possible to get by with techniques blithely ignorant of semantics, when you have humans in the loop –All you need is to find the right sweet spot, where the computer plays a pre-processing role and presents “potential solutions” –…and the human very gratefully does the in-depth analysis on those few potential solutions Examples: –The incredible success of the “bag of words” model! A bag of letters would be a disaster ;-) A bag of sentences and/or NLP would be good…but only to your discriminating and irascible searchers ;-) –Giving “pointers” to where results can be found and letting users do the “reading” is okay for simple queries But for aggregate queries, it becomes tiresome: “Here, read these 700 employee files to figure out the average employee salary.”
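To make the “bag of words” point concrete, here is a minimal sketch: a document is reduced to term counts (word order discarded), and two bags are compared with cosine similarity. The documents and helper names are made up for illustration.

```python
import math
from collections import Counter

def bag_of_words(text):
    """Reduce a document to term counts, throwing away word order."""
    return Counter(text.lower().split())

def cosine_sim(a, b):
    """Cosine similarity between two bags of words (0 = unrelated, 1 = identical direction)."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = bag_of_words("the dog chased the cat")
query = bag_of_words("cat chased dog")
score = cosine_sim(doc, query)
```

Note that `bag_of_words("cat chased dog")` and `bag_of_words("dog chased cat")` produce the same bag, which is exactly the model's semantic blindness the slide is pointing at; it works anyway because the human in the loop judges the results.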

Collaborative Computing AKA Brain-Cycle Stealing AKA Computizing Eyeballs A lot of exciting research related to the Web currently involves “co-opting” the masses to help with large-scale tasks –It is like “cycle stealing”, except we are stealing “human brain cycles” (the most idle of the computers, if there ever was one ;-) Remember the mice in The Hitchhiker’s Guide to the Galaxy? (..who were running a mass-scale experiment on the humans to figure out the question..) –Collaborative knowledge compilation (Wikipedia!) –Collaborative curation –Collaborative tagging –Paid collaboration/contracting Many big open issues: –How do you pose the problem such that it can be solved using collaborative computing? –How do you “incentivize” people into letting you steal their brain cycles?

Tapping into the Collective Unconscious AKA “Wisdom of the Crowds” Another thread of exciting research is driven by the realization that the Web is not random at all! –It is written by humans –…so analyzing its structure and content allows us to tap into the collective unconscious.. Meaning can emerge from syntactic notions such as “co-occurrences” and “connectedness” Examples: –Analyzing term co-occurrences in web-scale corpora to capture semantic information (today’s paper) –Analyzing the link structure of the web graph to discover communities DoD and NSA are very much into this as a way of breaking terrorist cells –Analyzing the transaction patterns of customers (collaborative filtering)
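A minimal sketch of the co-occurrence idea: count how often term pairs appear in the same document, then score term association. The toy corpus is made up, and the Dice coefficient used here is just one of several association measures one could plug in.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each document is reduced to its set of terms.
docs = [
    {"jaguar", "car", "engine"},
    {"jaguar", "cat", "jungle"},
    {"car", "engine", "road"},
    {"cat", "jungle", "prey"},
]

# Count document frequencies of terms and of term pairs.
pair_counts = Counter()
term_counts = Counter()
for d in docs:
    term_counts.update(d)
    pair_counts.update(frozenset(p) for p in combinations(sorted(d), 2))

def dice(t1, t2):
    """Dice coefficient: 2 * co-occurrence count / (count(t1) + count(t2))."""
    return 2 * pair_counts[frozenset((t1, t2))] / (term_counts[t1] + term_counts[t2])
```

On this corpus `dice("car", "engine")` comes out high (the terms always co-occur) while `dice("car", "jungle")` is zero: purely syntactic co-occurrence statistics recover a semantic association, which is the slide's point.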

If you don’t take the autonomous/adversarial nature of the Web into account, then it is gonna getcha.. Most “first-generation” ideas about the Web make too generous an assumption about the “good intentions” of the source/page creators. The reasonableness of this assumption is increasingly going to be called into question as the Web evolves in an uncontrolled manner. Controlling creation rights would remove the very essence of the Web’s scalability; instead we have to factor in its adversarial nature.. –Links can be manipulated to change page importance So we need “TrustRank” –Fake annotations can be added to pages and images So we need ESP-game-like self-correcting annotations.. –Fake/spam mails can be sent (and the nature of the spam mails can be altered to defeat simple spam classifiers…) So we need adversarial classification techniques –Sources may export untrustworthy/made-up data So we need SourceRank?
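The link-manipulation point can be illustrated with a minimal PageRank sketch. The graph below is a made-up toy “link farm”: pages X and Y exist only to link to S, and that is enough to lift S's rank above the honestly linked pages. This is a sketch of the standard power iteration, not the course's own code.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank on an adjacency dict {page: [outlinks]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Every page gets the baseline "random jump" mass.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page shares its rank equally among its outlinks.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: spread its rank uniformly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Toy link farm: X and Y funnel all their importance into S.
links = {"A": ["B"], "B": ["A"], "S": ["X", "Y"], "X": ["S"], "Y": ["S"]}
ranks = pagerank(links)
```

Here `ranks["S"]` exceeds `ranks["A"]` even though A and B link to each other in good faith, which is why raw link counting needs a trust-aware correction such as TrustRank.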

Interactive Review

Anatomy may be likened to a harvest-field. First come the reapers, who, entering upon untrodden ground, cut down great store of corn from all sides of them. These are the early anatomists of Europe. Then come the gleaners, who gather up ears enough from the bare ridges to make a few loaves of bread. Such were the anatomists of the last century. Last of all come the geese, who still contrive to pick up a few grains scattered here and there among the stubble, and waddle home in the evening, poor things, cackling with joy because of their success. Gentlemen, we are the geese. --John Barclay, English anatomist

Information Integration on the Web is still rife with uncut corn Unlike the anatomy of Barclay’s day, the Web is still young. We are just figuring out how to tap its potential. …You have great stores of uncut corn in front of you.