Development and Implementation of Classification and Clustering Methods for Unstructured Document Collections
Osipova Nataly
St. Petersburg State University, Faculty of Applied Mathematics and Control Processes, Department of Programming Technology
Contents Introduction Methods description Information Retrieval System Experiments
Contextual Document Clustering was developed in a joint project of the Faculty of Applied Mathematics and Control Processes, St. Petersburg State University, and the Northern Ireland Knowledge Engineering Laboratory (NIKEL), University of Ulster.
Definitions
- Document
- Terms dictionary (Dictionary)
- Cluster
- Word context
- Context or document conditional probability distribution
- Entropy
Document conditional probability distribution
Document x:
  y:      word1  word2  word3  …  wordn
  p(y|x): 5/m    10/m   6/m    …  16/m
Here y ranges over the words of x, tf(y) is the frequency of y in x, p(y|x) = tf(y)/m is the conditional probability of y in document x, and m is the size of document x. The vector (5/m, 10/m, 6/m, …, 16/m) is the document conditional probability distribution.
Word context
Word w occurs in documents x1, x2, …, xk; each document xi has its own conditional probability distribution p(y|xi). Combining these distributions over all documents that contain w yields the context conditional probability distribution of w:
  y:      word1  word2  word3  …
  p(y|w): 32/m   10/m   12/m   …
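The two distributions above can be sketched in Python; `doc_distribution` and `context_distribution` are hypothetical names, and pooling raw counts across the documents containing w is a simplification of the context definition:

```python
from collections import Counter

def doc_distribution(words):
    """p(y|x): relative frequency of each word y in document x."""
    tf = Counter(words)
    m = len(words)
    return {y: n / m for y, n in tf.items()}

def context_distribution(w, docs):
    """p(y|w): word distribution pooled over all documents that
    contain w -- a simplified version of the word context."""
    pooled = Counter()
    for doc in docs:
        if w in doc:
            pooled.update(doc)
    m = sum(pooled.values())
    return {y: n / m for y, n in pooled.items()}
```

Both functions return dictionaries mapping words to probabilities that sum to 1, which is the representation assumed by the distance computations later in the deck.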
Contents Introduction Methods description Information Retrieval System Experiments
Methods
- document clustering method
- dictionary building methods
- document classification method using a training set
Information retrieval methods:
- keyword search method
- cluster based search method
- similar documents search method
Contextual Document Clustering
Documents → Dictionary → Narrow context words → Distances calculation → Clusters
Entropy
For a context conditional probability distribution (p1, p2, …, pn) of word y, with p1 + p2 + … + pn = 1, the entropy is H = −Σ pi log pi. Entropy is an uncertainty measure; here it is used to characterize the commonness (narrowness) of a word's context.
Contextual Document Clustering
max H(y) = H(1/n, 1/n, …, 1/n) = log n
Entropy threshold
A word y is selected as a narrow-context word when its context entropy is small relative to the maximum: H(y) ≤ α · max H(y), where α < 1 is a tuning parameter.
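A minimal sketch of the entropy computation and the narrow-context test, assuming the threshold compares H(y) against α · log n; the function names and the default α are hypothetical:

```python
import math

def entropy(p):
    """Shannon entropy of a distribution given as {outcome: probability}."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def is_narrow(context, alpha=0.8):
    """A context is 'narrow' when its entropy is well below the
    maximum log(n) reached by the uniform distribution."""
    n = len(context)
    if n < 2:
        return True  # a one-word context is maximally narrow
    return entropy(context) <= alpha * math.log(n)
```

A context concentrated on a few words passes the test, while a near-uniform context does not, which matches the slide's use of entropy as a narrowness measure.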
Word Context - Document Distance
The distance is computed from three distributions: the context conditional probability distribution of word y, the conditional probability distribution of document x, and their average conditional probability distribution.
Word Context - Document Distance JS[p1,p2]=H( ) - 0.5H() )
Jensen-Shannon divergence
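The Jensen-Shannon divergence can be written directly from its definition; dictionaries map words to probabilities, missing words count as probability 0, and the function name is hypothetical:

```python
import math

def entropy(p):
    """Shannon entropy of {outcome: probability}."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def js_divergence(p1, p2):
    """Jensen-Shannon divergence with equal weights:
    JS[p1, p2] = H((p1 + p2)/2) - H(p1)/2 - H(p2)/2."""
    keys = set(p1) | set(p2)
    avg = {y: (p1.get(y, 0.0) + p2.get(y, 0.0)) / 2 for y in keys}
    return entropy(avg) - 0.5 * entropy(p1) - 0.5 * entropy(p2)
```

JS is 0 for identical distributions and reaches its maximum log 2 for distributions with disjoint support, so it is a bounded, symmetric distance between a word context and a document distribution.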
Dictionary construction
Why:
- large volumes: with 60,000 documents and 50,000 words, a single context can contain 15,000 words
- narrow context words are the most important
Dictionary construction
Delete words with:
1. too high or too low frequency
2. too high or too low document frequency
or both 1 and 2.
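The filtering rule above can be sketched as follows; the concrete bounds (`min_tf`, `max_tf`, `min_df`, `max_df_ratio`) are assumed tuning parameters, not values from the slides:

```python
from collections import Counter

def build_dictionary(docs, min_tf=3, max_tf=10000, min_df=2, max_df_ratio=0.5):
    """Keep words whose collection frequency (tf) and document
    frequency (df) both fall inside the given bounds."""
    tf = Counter()  # total occurrences across the collection
    df = Counter()  # number of documents containing the word
    for doc in docs:
        tf.update(doc)
        df.update(set(doc))
    max_df = max_df_ratio * len(docs)
    return {w for w in tf
            if min_tf <= tf[w] <= max_tf and min_df <= df[w] <= max_df}
```

Words that occur everywhere (stopword-like) or almost nowhere are dropped; the survivors form the dictionary used for context construction.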
Retrieval algorithms keyword search method cluster based search method search by example method
Keyword search method
  Document 1: word 1, word 2, word 3, …, word n1
  Document 2: word 10, word 25, word 30, …, word n2
  Document 3: word 15, word 2, word 32, …, word n3
  Document 4: word 11, word 21, word 3, …, word n4
Request: word 2 → Result set: document 1, document 3
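Keyword search amounts to an inverted index from words to documents; a minimal sketch with hypothetical function names:

```python
def build_index(docs):
    """Inverted index: word -> set of ids of documents containing it."""
    index = {}
    for doc_id, words in enumerate(docs):
        for w in set(words):
            index.setdefault(w, set()).add(doc_id)
    return index

def keyword_search(index, word):
    """All document ids that contain the requested word."""
    return sorted(index.get(word, set()))
```

On the slide's example (word 2 appearing in documents 1 and 3), a lookup returns exactly those two documents without scanning the collection.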
Cluster based search method
Cluster context words:
  Cluster 1: word 1, word 2, …, word n1
  Cluster 2: word 12, word 26, …, word n2
  Cluster 3: word 1, word 23, …, word n3
Each cluster holds its own set of documents.
Request: word 1 → Result set: Cluster 1, Cluster 3
Similar documents search
Within a cluster, the documents (document 1, …, document 7) are connected by a minimal spanning tree.
Request: document 3 → Result set: document 6, document 7
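One way to realize the spanning-tree search is Prim's algorithm over the pairwise distances inside a cluster, returning the query document's tree neighbours; this is an illustrative sketch, not the system's actual implementation:

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm over a distance dict {(i, j): d}, one entry
    per unordered pair. Returns the tree as a list of edges."""
    nodes = sorted({i for i, _ in dist} | {j for _, j in dist})
    in_tree = {nodes[0]}
    edges = []
    while len(in_tree) < len(nodes):
        # cheapest edge with exactly one endpoint already in the tree
        i, j = min(((i, j) for i, j in dist
                    if (i in in_tree) != (j in in_tree)),
                   key=lambda e: dist[e])
        edges.append((i, j))
        in_tree.update((i, j))
    return edges

def similar_documents(edges, doc):
    """Documents adjacent to `doc` in the spanning tree."""
    return sorted({j for i, j in edges if i == doc} |
                  {i for i, j in edges if j == doc})
```

Because the tree keeps only the shortest links, a document's tree neighbours are its closest documents in the cluster, which is exactly the result set the slide's example shows.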
Document classification: method 1
Inputs: clusters, list of topics, training set, test documents.
Topic contexts are built from the training set; distances between topic contexts and cluster contexts are computed.
Classification result:
  cluster 1 – topic 10
  cluster 2 – topic 3
  …
  cluster n – topic 30
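Method 1 can be sketched as nearest-context assignment: each cluster receives the topic whose context distribution is closest under a supplied distance (e.g. Jensen-Shannon). The function and argument names are hypothetical:

```python
def classify_clusters(cluster_contexts, topic_contexts, distance):
    """Label each cluster with the topic whose context distribution
    is nearest under `distance` (a function of two distributions)."""
    return {c: min(topic_contexts,
                   key=lambda t: distance(ctx, topic_contexts[t]))
            for c, ctx in cluster_contexts.items()}
```

Any distribution distance can be plugged in; in the deck's setting that would be the Jensen-Shannon divergence between the cluster context and each topic context.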
Document classification: method 2
Inputs: clusters, topics list, training set, test documents, and the whole document set.
Classification result:
  cluster 1 – topic 10
  cluster 2 – topic 3
  …
  cluster n – topic 30
Contents Introduction Methods description Information Retrieval System Experiments
Information Retrieval System Architecture Features Use
Information Retrieval System architecture: database, server, client.
IRS architecture
Data Base → Data Base Server (MS SQL Server 2000) → Local Area Network → “thick” client (C#)
IRS architecture
DBMS MS SQL Server 2000:
- high performance
- scalable
- secure
- handles huge volumes of data
- T-SQL stored procedures
IRS features In the IRS the following problems are solved: document clustering keyword search method cluster based search method similar documents search method document classification with the use of training set
DB structure
The database of the IRS consists of the following tables:
- documents
- all words dictionary
- dictionary
- relations between documents and words: document-word
- words contexts
- words with narrow contexts
- clusters
- intermediate tables for building the main tables and implementing retrieval
Algorithms implementation
The all words dictionary, the dictionary, the documents table, and the “document-word” table support keyword search; the words contexts, words with narrow contexts, clusters, and centroids support cluster based search and similar documents search.
Similar documents search
[Figure: a cluster of five documents (document 1 … document 5) with pairwise distances between them.]
Minimal spanning tree over a cluster's documents (document 1 … document 5).
Similar documents search implementation: clusters table, tree table, distances table.
IRS use
Contents Introduction Methods description Information Retrieval System Experiments
The test goals were:
- testing algorithm accuracy
- comparing the different classification methods
- evaluating algorithm efficiency
Experiments
- 60,000 documents
- 100 topics
- training set volume = 5% of the collection size
Experiments
Result analysis followed the methodology of the Russian Information Retrieval Evaluation Seminar; measures such as macro-average recall, precision, and F-measure were calculated.
Recall
Precision
F-measure
Result analysis
List of some topics the test documents were classified into:
№ Category
1 Family law
2 Inheritance law
3 Water industry
4 Catering
5 Consumer services for inhabitants
6 Truck rental
7 International space law
8 Territory in international law
9 Participants in foreign economic relations
10 Foreign economic transactions
11 Free economic and trade zones. Customs unions.
Result analysis
Recall results for every category; the best result for each category is shown in bold. All results are given in percent. [Per-category results table not recoverable from the slide.]
Thank you for your attention!