Text Mining: Text Classification and Text Clustering (2004.11)
Contents
- Introduction
- Related technologies
- Feature selection
- Text classification
- Text clustering
Introduction (1/2)
- Text classification (categorization): sorting new items into existing structures
  - general topic hierarchies
  - email folders
  - general file systems
- Information filtering/push
  - mail filtering (spam vs. not)
  - customized push services
Document Categorization
Document Clustering (topic discovery)
Introduction (2/2)
- Differences from data mining
  - analyzes both raw data and textual information at the same time
  - requires complicated feature-selection technologies
  - may include linguistic, lexical, and contextual techniques
Text classification example: e-mail
[Figure: sample documents train a classifier (1. learning); unknown documents are then assigned (2. classification) to folders A. fun, B. business, C. private]
Process
- Construction of vocabulary (optional)
- Extraction
  - keep incoming documents in the system
  - parsing
  - stemming
  - vector model (bag-of-words)
- Feature selection (reduction)
- Learning: off-line process that builds model parameters
- Categorization: on-line process
- Re-learning: on-line process
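The extraction steps above (parsing, stopword removal, bag-of-words counting) can be sketched in a few lines; the stopword list and sample document here are hypothetical, not from the slides:

```python
import re

# Hypothetical stopword list for illustration only.
STOPWORDS = {"the", "a", "is", "of", "and", "to", "in"}

def extract(text):
    """Parse a raw document into a bag-of-words:
    tokenize, drop stopwords, and count term frequencies."""
    tokens = re.findall(r"[a-z]+", text.lower())
    bag = {}
    for t in tokens:
        if t not in STOPWORDS:
            bag[t] = bag.get(t, 0) + 1
    return bag

doc = "The quick brown fox is quick and brown."
print(extract(doc))  # {'quick': 2, 'brown': 2, 'fox': 1}
```

A real pipeline would add stemming and feature selection between tokenization and the final vector, as the following slides describe.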
Extraction
- Databases
- Documents: incoming / training / categorized documents
- Dictionary: stopwords; Korean particles (조사) and endings (어미); …
Stemming
- Table lookup: record every stem related to a search term in a table
- N-gram stemmer
- Affix removal: strip prefixes, suffixes, and endings to extract the root
  - Porter algorithm, e.g. 'ies' → 'y', 'es' → 's', 's' → NULL
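The affix-removal idea can be sketched with just the three suffix rules shown on the slide; a real Porter stemmer has many more rules and contextual conditions, so this is only a minimal illustration:

```python
# Suffix-stripping rules from the slide ('ies'->'y', 'es'->'s', 's'->NULL).
# Rules are tried in order; only the first matching suffix is rewritten.
RULES = [("ies", "y"), ("es", "s"), ("s", "")]

def stem(word):
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

print(stem("ponies"))  # pony
print(stem("cats"))    # cat
```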
Extraction
- Index terms
  - mainly nouns and noun phrases
  - also adjective and verb phrases
- Indexing methods
  - statistical: word-occurrence statistics (term frequency)
  - linguistic: morphological analysis, syntactic parsing
Extraction
- Characteristics of Korean documents
  - flexible word spacing
  - compound-noun decomposition problem: 대학생선교회 → 대학 + 생선 + 교회 or 대학생 + 선교회
  - predicate inflection and contraction: syllable-level analysis needed
  - stopword handling
  - spelling handling
- Korean patterns suitable as index terms
  - noun: e.g. 정보
  - noun + noun: e.g. 정보검색
  - noun + particle + noun: e.g. 정보의 검색
Feature Selection (reduction): curse of dimensionality
- Removal of stopwords
- Feature selection methods
  - Zipf's law
  - DF (document frequency)-based
  - χ² statistics-based
  - mutual information
  - …
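DF-based selection simply drops terms that occur in too few documents, on the assumption that very rare terms are noisy. A minimal sketch, with a made-up toy corpus:

```python
# DF (document frequency)-based feature selection: keep only terms
# that appear in at least min_df documents. The corpus below is a
# hypothetical toy example, not data from the slides.
def df_filter(docs, min_df=2):
    df = {}
    for doc in docs:
        for term in set(doc):          # count each term once per document
            df[term] = df.get(term, 0) + 1
    return {t for t, n in df.items() if n >= min_df}

docs = [["price", "stock", "market"],
        ["stock", "trade"],
        ["movie", "market", "stock"]]
print(sorted(df_filter(docs)))  # ['market', 'stock']
```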
Stopwords
[figure: example stopword list]
Zipf's law
[figure: rank–frequency distribution of terms]
χ² statistics-based
[table: 2×2 contingency counts of term t (present/absent) against class C/¬C]
Parsing: representing documents
- Vector representation
  - term frequency
  - document frequency
  - weights
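A common way to combine term frequency and document frequency into weights is tf·idf; this sketch assumes the simple tf × log(N/df) form and a hypothetical toy corpus:

```python
import math

# Hypothetical toy corpus; each document is already a token list.
docs = [["stock", "market", "stock"],
        ["movie", "review"],
        ["stock", "review"]]

n = len(docs)
df = {}
for doc in docs:
    for term in set(doc):
        df[term] = df.get(term, 0) + 1

def tfidf(doc):
    """Weight each term by term frequency x inverse document frequency."""
    vec = {}
    for term in doc:
        vec[term] = vec.get(term, 0) + 1          # raw term frequency
    return {t: tf * math.log(n / df[t]) for t, tf in vec.items()}

print(tfidf(docs[0]))
```

Terms that occur in every document get weight log(1) = 0, while terms concentrated in few documents are boosted.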
Classification Model: machine learning approach
[Figure: training documents → Learner → model (hypothesis) parameters → Classifier; unknown documents → Classifier → categorized documents]
Classification Model: machine learning approach
- Naïve Bayesian classification
- Nearest-neighbor classification
Classification example
- Input (Korean news text, translated): "The researchers claimed that this is the world's first experimental demonstration of so-called therapeutic cloning, in which an embryo is created from the patient's own genes and used to culture healthy cells in the laboratory, which are then injected back into the patient; because the body shows no rejection of the injected cells, the method has long drawn the medical community's attention."
- Extracted index terms: 환자 본인 환자본인 유전자 이용 배아 이용 실험실 건강 세포 배양 환자 주입 치료복제법 실험 입증 이번 세계 최초 세계최초 연구진 주장 방법 주입 세포 인체 거부 반응 의학계 관심
- Category scores: veterinary medicine 0.191149; medicine/biotechnology/pharmacy 0.134847; dentistry 0.114641; biology/microbiology 0.109833; sex 0.099062; disease/symptoms/death 0.084554 …
Learning the text classifier
- Before the system starts
  - define categories (classes, topics)
  - learn from representative documents for each defined category
- During system operation
  - incremental learning for each classifier
  - define new categories by clustering uncategorized documents
Machine Learning-based Approach (basic architecture)
Machine Learning Methods
- Similarity-based
  - k-nearest neighbor
  - decision trees
- Statistical learning
  - naïve Bayes
  - Bayes nets
- Support vector machines
- Artificial neural networks
- …
- Others
  - hierarchical classification
  - expectation-maximization technique
  - variants of boosting
  - active learning
Naïve Bayes Text Classifier
- Classification model of NB classifiers: c* = argmax_c P(c) · ∏_i P(w_i | c)
  - class prior estimate: P(c), the fraction of training documents in class c
  - word probability estimate: P(w_i | c), estimated from word counts within class c
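A minimal multinomial naïve Bayes sketch, assuming Laplace (add-one) smoothing for the word-probability estimates (the smoothing choice and the tiny corpus are illustrative assumptions, not from the slides):

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial naive Bayes: score(c, d) = log P(c) + sum_i log P(w_i|c),
    with the class prior and word probabilities estimated from counts."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = defaultdict(Counter)   # per-class word counts
        self.vocab = set()
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc)
            self.vocab.update(doc)
        return self

    def predict(self, doc):
        def score(c):
            total = sum(self.counts[c].values())
            s = math.log(self.priors[c])
            for w in doc:
                # Laplace smoothing avoids zero probabilities for unseen words.
                p = (self.counts[c][w] + 1) / (total + len(self.vocab))
                s += math.log(p)
            return s
        return max(self.classes, key=score)

nb = NaiveBayes().fit(
    [["stock", "market"], ["trade", "stock"], ["movie", "actor"]],
    ["business", "business", "fun"])
print(nb.predict(["stock", "trade"]))  # business
```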
Uses of Clustering in IR
- Clustering as representation (abstraction)
  - clustering is unsupervised learning of the underlying structure (classes)
  - clustering can be used to transform representations: documents are represented by class membership as well as by individual terms
  - can be viewed as dimensionality reduction, especially term clustering (e.g., word-variant clusters)
- Clustering for browsing
  - proposed as a technique for organizing documents for browsing, interaction, and visualization
    - constructing hypertext
    - clustering the results of searches
    - iterative clustering of the collection (e.g., Scatter/Gather)
    - clustering the web
  - also used to group terms for browsing: automatic thesauri, topic summaries
- Clustering for topic discovery
Introduction
- Text clustering
  - summarization of large text data
  - discovering new categories
Abstraction of a set of documents
[figure: documents within the same cluster are treated as mutually "relevant"]
Information Retrieval (browsing)
- Clustering of query results (Scatter/Gather)
Clustering for Topic Discovery (evolution of a topic hierarchy)
[Figure: hierarchy A ("Movie & Film" → "Plays", "Film Festivals", …) evolves into hierarchy B ("Screen Plays", "Movie", "Genres", "Film Festival" → "Horror", "Science Fiction", …), illustrating reorganization, new-topic discovery, concept drift, and change of viewpoint]
Clustering Algorithms
- Two general methodologies
  - Hierarchical
    - agglomerative: pairs of items or clusters are successively linked to produce larger clusters
    - divisive: start with the whole set as one cluster and successively divide it into smaller partitions
  - Non-hierarchical: divide a set of N items into M clusters (top-down)
- Graph partitioning
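The agglomerative (bottom-up) strategy can be sketched with single-link merging on one-dimensional toy numbers; real document clustering would use vector similarities (e.g., cosine over tf·idf) instead:

```python
# Agglomerative clustering sketch: start with singleton clusters and
# repeatedly merge the two closest clusters (single-link distance)
# until only k clusters remain. Items are plain numbers to keep the
# toy example small.
def single_link(a, b):
    return min(abs(x - y) for x in a for y in b)

def agglomerative(items, k):
    clusters = [[x] for x in items]
    while len(clusters) > k:
        # find the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: single_link(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

print(agglomerative([1.0, 1.2, 5.0, 5.1, 9.0], 3))
# [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

The divisive variant runs the same idea top-down, and non-hierarchical methods such as k-means fix the number of clusters M in advance.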
Supervised Clustering (for Topic Discovery)
[Figure: a document collection is clustered into groups A'–E', guided by human knowledge of topics (categories)]