1
Recent Trends in Text Mining Girish Keswani gkeswani@micron.com
2
Text Mining?
What? Data mining on text data
Why? Information Retrieval, Confusion Set Disambiguation, Topic Distillation
How? Data mining techniques
3
Organization
Text Mining Algorithms
Jargon Used
Background: Data Modeling, Text Classification, and Text Clustering
Applications
Experiments {NBC, NN and ssFCM}
Further Work
References
4
Text Mining Algorithms
Classification Algorithms: Naïve Bayes Classifier, Decision Trees, Neural Networks
Clustering Algorithms: EM Algorithms, Fuzzy Clustering
5
Jargon
DM: Data Mining
IR: Information Retrieval
NBC: Naïve Bayes Classifier
EM: Expectation Maximization
NN: Neural Networks
ssFCM: Semi-Supervised Fuzzy C-Means
Labeled Data (Training Data), Unlabeled Data, Test Data
6
Background: Modeling
Vector Space Model
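The slide names the vector space model only by title; the minimal sketch below (my own illustration, not code from the talk) shows the usual idea: each document becomes a TF-IDF-weighted term vector, and document similarity is the cosine between vectors. The example documents are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; each document becomes one row of a sparse term-weight matrix.
docs = [
    "data mining on text data",
    "information retrieval and text classification",
    "clustering text documents",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)        # shape: (n_documents, n_terms)

# Cosine similarity between document vectors is the standard IR relevance score.
print(cosine_similarity(X[0], X[1]))
```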
7
Background: Modeling
Generative Models of Data [13]: probabilistic models in which, "to generate a document, a class is first selected based on its prior probability and then a document is generated using the parameters of the chosen class distribution."
NBC and the EM algorithm are based on this model.
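To make the generative view concrete, here is a minimal sketch (my own illustration, not from the slides) of a multinomial Naïve Bayes classifier: class priors and per-class word distributions are estimated from counts, and a document is assigned to the class most likely to have generated it.

```python
import numpy as np

def train_nbc(X, y, n_classes, alpha=1.0):
    """X: (n_docs, n_words) term-count matrix; y: integer class labels."""
    n_docs, n_words = X.shape
    log_prior = np.zeros(n_classes)
    log_word = np.zeros((n_classes, n_words))
    for c in range(n_classes):
        Xc = X[y == c]
        log_prior[c] = np.log(len(Xc) / n_docs)        # P(class)
        counts = Xc.sum(axis=0) + alpha                 # Laplace smoothing
        log_word[c] = np.log(counts / counts.sum())     # P(word | class)
    return log_prior, log_word

def classify(X, log_prior, log_word):
    # argmax_c  log P(c) + sum_w count(w, doc) * log P(w | c)
    return np.argmax(X @ log_word.T + log_prior, axis=1)
```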
8
Importance of Unlabeled Data?
[Diagram: labeled data, unlabeled data, and test data sets]
Unlabeled data provides access to the feature distribution (set F in the diagram) using joint probability distributions.
9
How to make use of Unlabeled Data?
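This slide is a figure; the usual answer in [2] and [3] is an EM loop around NBC. The sketch below is a simplified, hedged version of that idea (the cited papers weight unlabeled documents by their posterior probabilities; hard assignment is used here only to keep the code short), with illustrative names and dense arrays assumed.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def em_nbc(X_labeled, y_labeled, X_unlabeled, n_iter=10):
    """Semi-supervised NBC sketch: X_* are dense term-count arrays."""
    clf = MultinomialNB()
    clf.fit(X_labeled, y_labeled)                 # initialize from labeled data only
    X_all = np.vstack([X_labeled, X_unlabeled])
    for _ in range(n_iter):
        # E-step: the current model labels the unlabeled documents.
        y_unlabeled = clf.predict_proba(X_unlabeled).argmax(axis=1)
        # M-step: re-estimate the classifier from labeled + newly labeled docs.
        clf.fit(X_all, np.concatenate([y_labeled, y_unlabeled]))
    return clf
```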
11
Experimental Results [1] Using NBC, EM and ssFCM
12
Experimental Results [2] Using NBC and EM
13
Extensions and Variants of These Approaches
The authors of [6] propose a Class Distribution Constraint matrix, with results on Confusion Set Disambiguation.
Automatic Title Generation [7]: a non-extractive approach using the EM algorithm.
14
Relational Data [9]
A collection of data in which the relations between entities are made explicit is known as relational data.
Probabilistic Relational Models
15
Commercial Use/Products
IBM Text Analyzer [11]: decision-tree based
SAS Text Miner [12]: Singular Value Decomposition
Filtering junk email (Hotmail, Yahoo)
Advanced search engines
16
Applications: Search Engines
17
Vivisimo Search Engine: (www.vivisimo.com)
18
Experiments
NBC: Naïve Bayes Classifier (probabilistic)
NN: Neural Networks
ssFCM: Semi-Supervised Fuzzy C-Means clustering (fuzzy)
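For readers unfamiliar with ssFCM, the sketch below shows one simplified way semi-supervised fuzzy clustering can work: standard fuzzy c-means updates, except that documents with known labels keep their memberships clamped so they pull the cluster prototypes toward the labeled classes. This is an illustrative variant, not the exact update rule used in [1].

```python
import numpy as np

def ssfcm(X, fixed_memberships, n_clusters, m=2.0, n_iter=50, eps=1e-9):
    """X: (n, d) document vectors.
    fixed_memberships: {row index: one-hot membership} for labeled documents."""
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(n_clusters), size=n)   # fuzzy memberships
    for i, u in fixed_memberships.items():
        U[i] = u                                            # clamp labeled docs
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / (W.sum(axis=0)[:, None] + eps)
        # Squared distance of every document to every cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + eps
        U = 1.0 / (d2 ** (1.0 / (m - 1)))                   # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
        for i, u in fixed_memberships.items():
            U[i] = u                                        # keep labels fixed
    return U, centers
```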
19
Datasets (20 Newsgroups Data)

Sampling I:
Dataset      min2   min4   min6
# Features   --     9467   5685

Sampling II:
Dataset    Sampling Percentage   Number of Features
Sample25   25%                   13925
Sample30   30%                   15067
Sample35   35%                   16737
Sample40   40%                   16871
Sample45   45%                   17712
Sample50   50%                   19135

Data vectors: raw data → Sampling I → Sampling II → vectors
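The slide does not spell out how the samples were built; assuming the min2/min4/min6 datasets refer to minimum document-frequency cutoffs (an assumption on my part, not stated in the talk), comparable feature counts could be produced along these lines:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer

news = fetch_20newsgroups(subset="train")
for min_df in (2, 4, 6):                      # assumed meaning of min2/min4/min6
    vec = CountVectorizer(min_df=min_df)      # drop terms seen in < min_df docs
    X = vec.fit_transform(news.data)
    print(f"min_df={min_df}: {X.shape[1]} features")
```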
20
Naïve Bayes Classifier

Sample     % Training   % Test   Accuracy %
Sample25   20           80       34.4637
Sample25   63           36       48.4945
Sample25   76           23       50.9322
Sample25   82           17       47.7728
Sample25   86           13       48.9971
Sample25   20           80       31.5436
Sample25   63           36       48.0729
Sample25   76           23       47.8661
Sample25   82           17       50.5568
Sample25   86           13       50.4587
Sample30   33           66       39.1137
Sample30   66           33       46.4233
Sample30   77           22       48.5528
Sample30   83           16       52.7383
Sample30   86           13       51.2136
Sample30   33           66       39.26
Sample30   66           33       47.0192
Sample30   77           22       48.8439
Sample30   83           16       49.6907
Sample30   86           13       51.6169
21
Naïve Bayes Classifier
22
NBC: Sample25 and Sample30
23
ssFCM: Effect of Labeled Data; Effect of Unlabeled Data
24
ssFCM
25
Further Work
Ensemble of Classifiers [16]
26
Further Work
Knowledge Gathering from Experts
E.g., 3-class data: [Diagram: input data {C1, C2, C3} and test data fed to a classifier]
27
References
[1] G. Keswani and L. O. Hall, "Text Classification using Semi-Supervised Fuzzy Clustering," IEEE WCCI 2002.
[2] K. P. Nigam, "Using Unlabeled Data to Improve Text Classification."
[3] K. P. Nigam et al., "Text Classification from Labeled and Unlabeled Documents using EM."
[4] T. Zhang, "The Value of Unlabeled Data for Classification Problems."
[5] M. Szummer et al., "Learning from Partially Labeled Data."
[6] Y. Tsuruoka and J. Tsujii, "Training a Naïve Bayes Classifier via the EM Algorithm with a Class Distribution Constraint."
[7] P. E. Kennedy and A. G. Hauptmann, "Automatic Title Generation using EM."
[8] F. G. Cozman and I. Cohen, "Unlabeled Data Can Degrade Classification Performance of Generative Classifiers."
[9] B. Taskar et al., "Probabilistic Classification and Clustering in Relational Data."
[10] Y. C. Fang et al., "Using Clustering to Boost Text Classification."
[11] D. E. Johnson et al., "A Decision-Tree-Based Symbolic Rule Induction System for Text Categorization" (IBM Text Analyzer).
[12] Reincke, "SAS Text Miner."
[13] R. O. Duda and P. E. Hart, "Pattern Recognition," 2000.
[14] T. Mitchell, "Machine Learning."
[15] M. Dunham, "Data Mining."
[16] http://www-2.cs.cmu.edu/afs/cs/project/jair/pub/volume11/opitz99a-html/