1
ICS 278: Data Mining
Lecture 14: Document Clustering and Topic Extraction
Padhraic Smyth
Department of Information and Computer Science
University of California, Irvine

Note: many of the slides on topic models were adapted from the presentation by Griffiths and Steyvers at the National Academy of Sciences Symposium on “Mapping Knowledge Domains”, Beckman Center, UC Irvine, May 2003.
2
Text Mining
– Information Retrieval
– Text Classification
– Text Clustering
– Information Extraction
3
Document Clustering
Set of documents D in term-vector form
– no class labels this time
– want to group the documents into K groups or into a taxonomy
– each cluster hypothetically corresponds to a “topic”
Methods:
– any of the well-known clustering methods
– k-means, e.g., “spherical k-means”: normalize the document vectors so that Euclidean distance corresponds to cosine distance
– hierarchical clustering
– probabilistic model-based clustering methods, e.g., mixtures of multinomials
Single-topic versus multiple-topic models
– extensions to author-topic models
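For concreteness, here is a minimal sketch of spherical k-means on TF-IDF document vectors using scikit-learn; the toy corpus, number of clusters, and parameter settings are illustrative assumptions, not part of the original slides.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = [
    "the hard drive failed during the software install",
    "format the disk and reinstall the system software",
    "the fbi presented evidence about the claim",
    "christian faith and claims about god",
]

vec = TfidfVectorizer()
# L2-normalizing the rows makes Euclidean k-means equivalent to
# clustering by cosine distance ("spherical" k-means)
X = normalize(vec.fit_transform(docs))
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# most probable terms per cluster, analogous to the p(t|k) tables below
terms = vec.get_feature_names_out()
for k, center in enumerate(km.cluster_centers_):
    print(k, [terms[i] for i in np.argsort(center)[::-1][:4]])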
4
Mixture Model Clustering
6
Mixture Model Clustering
Conditional independence model for each component (often quite useful to first order)
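In standard finite-mixture notation (a reminder, not text from the slide), the model is a weighted sum of K components, each assuming the d terms are conditionally independent given the component:

\[
p(\mathbf{x}) \;=\; \sum_{k=1}^{K} \alpha_k \, p(\mathbf{x} \mid c_k),
\qquad
p(\mathbf{x} \mid c_k) \;=\; \prod_{j=1}^{d} p(x_j \mid c_k),
\qquad
\sum_{k=1}^{K} \alpha_k = 1 .
\]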
7
Mixtures of Documents
(figure: a binary documents × terms matrix; 1s mark term occurrences, and the document rows fall into two blocks labeled Component 1 and Component 2)
8
(figure: the same binary documents × terms matrix, shown without the component labels)
9
(figure: the documents × terms matrix with cluster labels C1, C2 attached to the documents and treated as missing)
10
(figure: the matrix with membership probabilities P(C1 | xi) and P(C2 | xi) attached to each document)
E-Step: estimate component membership probabilities given current parameter estimates
11
(figure: the matrix with each document fractionally assigned to C1 and C2)
M-Step: use the “fractionally” weighted data to get new estimates of the parameters
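A minimal numpy sketch of these two steps for a mixture of multinomials over term counts; the initialization, smoothing constant, and iteration count are illustrative choices.

import numpy as np

def em_mixture_multinomials(X, K, n_iter=50, seed=0):
    """EM for a mixture of multinomials; X is a D x W term-count matrix."""
    rng = np.random.default_rng(seed)
    D, W = X.shape
    weights = np.full(K, 1.0 / K)              # component weights
    theta = rng.dirichlet(np.ones(W), size=K)  # term probabilities p(t|k)
    for _ in range(n_iter):
        # E-step: log p(x_d, k) up to a constant that is equal across k
        log_post = X @ np.log(theta).T + np.log(weights)   # D x K
        log_post -= log_post.max(axis=1, keepdims=True)
        resp = np.exp(log_post)
        resp /= resp.sum(axis=1, keepdims=True)            # P(k | x_d)
        # M-step: re-estimate parameters from the fractionally weighted data
        weights = resp.mean(axis=0)
        theta = resp.T @ X + 1e-3                          # smoothed counts
        theta /= theta.sum(axis=1, keepdims=True)
    return weights, theta, resp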
12
A Document Cluster

Most Likely Terms in Component 5 (weight = 0.08):
  TERM     p(t|k)
  write    0.571
  drive    0.465
  problem  0.369
  mail     0.364
  articl   0.332
  hard     0.323
  work     0.319
  system   0.303
  good     0.296
  time     0.273

Highest Lift Terms in Component 5 (weight = 0.08):
  TERM     LIFT  p(t|k)  p(t)
  scsi     7.7   0.13    0.02
  drive    5.7   0.47    0.08
  hard     4.9   0.32    0.07
  card     4.2   0.23    0.06
  format   4.0   0.12    0.03
  softwar  3.8   0.21    0.05
  memori   3.6   0.14    0.04
  install  3.6   0.14    0.04
  disk     3.5   0.12    0.03
  engin    3.3   0.21    0.06
13
Another Document Cluster

Most Likely Terms in Component 1 (weight = 0.11):
  TERM       p(t|k)
  articl     0.684
  good       0.368
  dai        0.363
  fact       0.322
  god        0.320
  claim      0.294
  apr        0.279
  fbi        0.256
  christian  0.256
  group      0.239

Highest Lift Terms in Component 1 (weight = 0.11):
  TERM       LIFT  p(t|k)  p(t)
  fbi        8.3   0.26    0.03
  jesu       5.5   0.16    0.03
  fire       5.2   0.20    0.04
  christian  4.9   0.26    0.05
  evid       4.8   0.24    0.05
  god        4.6   0.32    0.07
  gun        4.2   0.17    0.04
  faith      4.2   0.12    0.03
  kill       3.8   0.22    0.06
  bibl       3.7   0.11    0.03
14
A topic is represented as a (multinomial) distribution over words

  Example topic #1            Example topic #2
  SPEECH          .0691       WORDS        .0671
  RECOGNITION     .0412       WORD         .0557
  SPEAKER         .0288       USER         .0230
  PHONEME         .0224       DOCUMENTS    .0205
  CLASSIFICATION  .0154       TEXT         .0195
  SPEAKERS        .0140       RETRIEVAL    .0152
  FRAME           .0135       INFORMATION  .0144
  PHONETIC        .0119       DOCUMENT     .0144
  PERFORMANCE     .0111       LARGE        .0102
  ACOUSTIC        .0099       COLLECTION   .0098
  BASED           .0098       KNOWLEDGE    .0087
  PHONEMES        .0091       MACHINE      .0080
  UTTERANCES      .0091       RELEVANT     .0077
  SET             .0089       SEMANTIC     .0076
  LETTER          .0088       SIMILARITY   .0071
  …                           …
15
The basic model…
(graphical model: a single cluster variable C with directed edges to conditionally independent word variables X1, X2, …, Xd)
16
A better model…
(graphical model: multiple latent variables A, B, C, each connected to the word variables X1, X2, …, Xd)
17
A better model…
(the same graphical model: latent variables A, B, C over words X1, X2, …, Xd)
History:
– latent class models in statistics
– Hofmann applied them to documents (SIGIR ’99)
– recent extensions, e.g., Blei, Ng, and Jordan (JMLR, 2003)
– variously known as factor/aspect/latent class models
18
A better model…
(the same graphical model)
Inference can be intractable due to undirected loops!
19
A better model for documents…
Multi-topic model
– a document is generated from multiple components
– multiple components can be active at once
– each component = a multinomial distribution
– parameter estimation is tricky
– very useful: “parses” documents into high-level semantic components
20
History of multi-topic models
Latent class models in statistics
Hofmann (1999)
– original application to documents
Blei, Ng, and Jordan (2001, 2003)
– variational methods
Griffiths and Steyvers (2003)
– Gibbs sampling approach (very efficient)
21
  Topic 1                 Topic 2                  Topic 3                     Topic 4
  GROUP      0.057185     DYNAMIC     0.152141     DISTRIBUTED   0.192926      RESEARCH    0.066798
  MULTICAST  0.051620     STRUCTURE   0.137964     COMPUTING     0.044376      SUPPORTED   0.043233
  INTERNET   0.049499     STRUCTURES  0.088040     SYSTEMS       0.038601      PART        0.035590
  PROTOCOL   0.041615     STATIC      0.043452     SYSTEM        0.031797      GRANT       0.034476
  RELIABLE   0.020877     PAPER       0.032706     HETEROGENEOUS 0.030996      SCIENCE     0.023250
  GROUPS     0.019552     DYNAMICALLY 0.023940     ENVIRONMENT   0.023163      FOUNDATION  0.022653
  PROTOCOLS  0.019088     PRESENT     0.015328     PAPER         0.017960      FL          0.021220
  IP         0.014980     META        0.015175     SUPPORT       0.016587      WORK        0.021061
  TRANSPORT  0.012529     CALLED      0.011669     ARCHITECTURE  0.016416      NATIONAL    0.019947
  DRAFT      0.009945     RECURSIVE   0.010145     ENVIRONMENTS  0.013271      NSF         0.018116

  Topics 1–3: “content” components; topic 4: a “boilerplate” component
22
  Topic 5                  Topic 6                    Topic 7                Topic 8
  DIMENSIONAL 0.038901     RULES          0.090569    ORDER      0.192759    GRAPH      0.095687
  POINTS      0.037263     CLASSIFICATION 0.062699    TERMS      0.048688    PATH       0.061784
  SURFACE     0.031438     RULE           0.062174    PARTIAL    0.044907    GRAPHS     0.061217
  GEOMETRIC   0.025006     ACCURACY       0.028926    HIGHER     0.041284    PATHS      0.030151
  SURFACES    0.020152     ATTRIBUTES     0.023090    REDUCTION  0.035061    EDGE       0.028590
  MESH        0.016875     INDUCTION      0.021909    PAPER      0.028602    NUMBER     0.022775
  PLANE       0.013902     CLASSIFIER     0.019418    TERM       0.018204    CONNECTED  0.016817
  POINT       0.013780     SET            0.018303    ORDERING   0.017652    DIRECTED   0.014405
  GEOMETRY    0.013780     ATTRIBUTE      0.016204    SHOW       0.017022    NODES      0.013625
  PLANAR      0.012385     CLASSIFIERS    0.015417    MAGNITUDE  0.015526    VERTICES   0.013554

  Topic 9                   Topic 10                Topic 11                Topic 12
  INFORMATION   0.281237    SYSTEM     0.143873     PAPER      0.077870     LANGUAGE     0.158786
  TEXT          0.048675    FILE       0.054076     CONDITIONS 0.041187     PROGRAMMING  0.097186
  RETRIEVAL     0.044046    OPERATING  0.053963     CONCEPT    0.036268     LANGUAGES    0.082410
  SOURCES       0.029548    STORAGE    0.039072     CONCEPTS   0.033457     FUNCTIONAL   0.032815
  DOCUMENT      0.029000    DISK       0.029957     DISCUSSED  0.027414     SEMANTICS    0.027003
  DOCUMENTS     0.026503    SYSTEMS    0.029221     DEFINITION 0.024673     SEMANTIC     0.024341
  RELEVANT      0.018523    KERNEL     0.028655     ISSUES     0.024603     NATURAL      0.016410
  CONTENT       0.016574    ACCESS     0.018293     PROPERTIES 0.021511     CONSTRUCTS   0.014129
  AUTOMATICALLY 0.009326    MANAGEMENT 0.017218     IMPORTANT  0.021370     GRAMMAR      0.013640
  DIGITAL       0.008777    UNIX       0.016878     EXAMPLES   0.019754     LISP         0.010326
23
  Topic 13                  Topic 14                  Topic 15                      Topic 16
  MODEL        0.429185     PAPER       0.050411      TYPE           0.088650       KNOWLEDGE   0.212603
  MODELS       0.201810     APPROACHES  0.045245      SPECIFICATION  0.051469       SYSTEM      0.090852
  MODELING     0.066311     PROPOSED    0.043132      TYPES          0.046571       SYSTEMS     0.051978
  QUALITATIVE  0.018417     CHANGE      0.040393      FORMAL         0.036892       BASE        0.042277
  COMPLEX      0.009272     BELIEF      0.025835      VERIFICATION   0.029987       EXPERT      0.020172
  QUANTITATIVE 0.005662     ALTERNATIVE 0.022470      SPECIFICATIONS 0.024439       ACQUISITION 0.017816
  CAPTURE      0.005301     APPROACH    0.020905      CHECKING       0.024439       DOMAIN      0.016638
  MODELED      0.005301     ORIGINAL    0.019026      SYSTEM         0.023259       INTELLIGENT 0.015737
  ACCURATELY   0.004639     SHOW        0.017852      PROPERTIES     0.018242       BASES       0.015390
  REALISTIC    0.004278     PROPOSE     0.016991      ABSTRACT       0.016826       BASED       0.014004

  “Style” components (e.g., topic 14)
24
A generative model for documents (Blei, Ng, & Jordan, 2003)
– each document is a mixture of topics
– each word is chosen from a single topic, drawn from that topic’s multinomial parameters
25
A generative model for documents
– called Latent Dirichlet Allocation (LDA)
– introduced by Blei, Ng, and Jordan (2003) as a reinterpretation of PLSI (Hofmann, 2001)
(graphical model: for each word token, a topic z is drawn and then a word w is drawn from that topic)
26
Two matrix factorizations of the words × documents matrix (LSA: Dumais, Landauer)
– SVD: words × documents ≈ U D V, with U a words × dims matrix and V a dims (vectors) × documents matrix
– LDA: P(w), the words × documents matrix, factors as P(w|z) P(z), with P(w|z) a words × topics matrix and P(z) a topics × documents matrix
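To see the contrast concretely, a minimal sketch using scikit-learn’s TruncatedSVD (LSA) and LatentDirichletAllocation on a toy count matrix; the corpus and component counts are illustrative assumptions.

from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "heart love soul tears joy love soul",
    "scientific knowledge work research mathematics work",
    "love research work heart mathematics knowledge",
]
X = CountVectorizer().fit_transform(docs)

svd = TruncatedSVD(n_components=2).fit(X)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

print(svd.components_)  # mixed-sign basis vectors: hard to interpret
print(lda.components_)  # nonnegative topic-word weights: normalize rows for p(w|z)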
27
A generative model for documents

  topic 1                  topic 2
  HEART        0.2         HEART        0.0
  LOVE         0.2         LOVE         0.0
  SOUL         0.2         SOUL         0.0
  TEARS        0.2         TEARS        0.0
  JOY          0.2         JOY          0.0
  SCIENTIFIC   0.0         SCIENTIFIC   0.2
  KNOWLEDGE    0.0         KNOWLEDGE    0.2
  WORK         0.0         WORK         0.2
  RESEARCH     0.0         RESEARCH     0.2
  MATHEMATICS  0.0         MATHEMATICS  0.2

  P(w | z = 1) = φ(1)      P(w | z = 2) = φ(2)
28
Choose mixture weights θ = {P(z = 1), P(z = 2)} for each document, then generate a “bag of words”:

  {0, 1}        MATHEMATICS KNOWLEDGE RESEARCH WORK MATHEMATICS RESEARCH WORK SCIENTIFIC MATHEMATICS WORK
  {0.25, 0.75}  SCIENTIFIC KNOWLEDGE MATHEMATICS SCIENTIFIC HEART LOVE TEARS KNOWLEDGE HEART MATHEMATICS
  {0.5, 0.5}    HEART RESEARCH LOVE MATHEMATICS WORK TEARS SOUL KNOWLEDGE HEART WORK
  {0.75, 0.25}  JOY SOUL TEARS MATHEMATICS TEARS LOVE LOVE LOVE SOUL TEARS
  {1, 0}        LOVE JOY SOUL LOVE TEARS SOUL SOUL TEARS JOY
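A minimal sketch of this generative process for the two toy topics above; the word lists and probabilities come from the slides, while the document length and random seed are arbitrary choices.

import numpy as np

rng = np.random.default_rng(1)
vocab = ["HEART", "LOVE", "SOUL", "TEARS", "JOY",
         "SCIENTIFIC", "KNOWLEDGE", "WORK", "RESEARCH", "MATHEMATICS"]
phi = np.array([[0.2] * 5 + [0.0] * 5,    # topic 1: "love" words
                [0.0] * 5 + [0.2] * 5])   # topic 2: "science" words

def generate_doc(theta, n_words=10):
    # for each token, draw a topic z ~ theta, then a word w ~ phi[z]
    z = rng.choice(2, size=n_words, p=theta)
    return " ".join(vocab[rng.choice(10, p=phi[t])] for t in z)

for theta in [(0, 1), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1, 0)]:
    print(theta, generate_doc(np.array(theta)))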
29
Bayesian inference

  P(z | w) = P(w | z) P(z) / Σ_z' P(w | z') P(z')

– the sum in the denominator is over T^n terms (T topics, n word tokens)
– so the full posterior is only tractable up to a constant
30
Bayesian sampling
Sample from a Markov chain that converges to the target distribution of interest
– known in general as Markov chain Monte Carlo (MCMC)
A simple version is known as Gibbs sampling
– say we are interested in estimating p(x, y | D)
– we can approximate it by iteratively sampling from p(x | y, D) and p(y | x, D)
– useful when the conditionals are known but the joint distribution is not easy to work with
– converges to the true distribution under fairly broad assumptions
Can compute approximate statistics from intractable distributions
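To make the iterative scheme concrete, a minimal sketch of Gibbs sampling for a toy bivariate normal with correlation rho, where both conditionals are available in closed form; the target distribution and the value of rho are illustrative assumptions, not from the slides.

import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                    # correlation of the toy bivariate normal
x = y = 0.0
samples = []
for _ in range(10000):
    # alternate draws from the full conditionals:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    samples.append((x, y))
samples = np.array(samples[1000:])     # discard burn-in
print(np.corrcoef(samples.T)[0, 1])    # close to 0.8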
31
Gibbs sampling
Need the full conditional distributions for the variables; since we only sample z, we need

  P(z_i = j | z_-i, w) ∝ (n(w_i, j) + β) / (n(·, j) + Wβ) × (n(d_i, j) + α) / (n(d_i, ·) + Tα)

where n(w, j) is the number of times word w is assigned to topic j and n(d, j) is the number of times topic j is used in document d (both counts excluding the current token i).
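A minimal collapsed Gibbs sampler built directly on that conditional; the hyperparameters alpha and beta, the iteration count, and the assumption that docs is a list of word-id lists are all illustrative choices.

import numpy as np

def lda_gibbs(docs, W, T, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    nwt = np.zeros((W, T))            # n(w, j): word-topic counts
    ndt = np.zeros((len(docs), T))    # n(d, j): document-topic counts
    nt = np.zeros(T)                  # n(., j): total tokens per topic
    z = [rng.integers(T, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):    # counts from random initial assignments
        for i, w in enumerate(doc):
            nwt[w, z[d][i]] += 1; ndt[d, z[d][i]] += 1; nt[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                j = z[d][i]           # remove the token's current assignment
                nwt[w, j] -= 1; ndt[d, j] -= 1; nt[j] -= 1
                # the document-length denominator is constant in j, so it cancels
                p = (nwt[w] + beta) / (nt + W * beta) * (ndt[d] + alpha)
                j = rng.choice(T, p=p / p.sum())
                z[d][i] = j           # resample and restore the counts
                nwt[w, j] += 1; ndt[d, j] += 1; nt[j] += 1
    phi = (nwt + beta) / (nt + W * beta)       # estimate of p(w | topic)
    theta = (ndt + alpha) / (ndt.sum(axis=1, keepdims=True) + T * alpha)
    return phi, theta, z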
32
Gibbs sampling: iteration 1
(figure: the topic assignment z of each word token after random initialization)
33
Gibbs sampling: iterations 1, 2
(figure: the assignments being resampled token by token)
40
Gibbs sampling: iterations 1, 2, …, 1000
(figure: the assignments after the chain has run for 1000 iterations)
41
A visual example: Bars
– pixel = word
– image = document
– sample each pixel from a mixture of topics
44
Interpretable decomposition
– SVD gives a basis for the data, but not an interpretable one
– the true basis is not orthogonal, so rotation does no good
45
Bayesian model selection
How many topics T do we need? A Bayesian would consider the posterior

  P(T | w) ∝ P(w | T) P(T)

– P(w | T) involves summing over all possible assignments z
– but it can be approximated by sampling
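One common sampling-based approximation is the harmonic-mean estimator over retained Gibbs samples; a minimal sketch, assuming loglik_samples holds log P(w | z) for each sample (samples_by_T below is a hypothetical container, named only for illustration).

import numpy as np
from scipy.special import logsumexp

def log_p_w_given_T(loglik_samples):
    # harmonic-mean estimator: P(w|T) ≈ [ (1/S) Σ_s 1 / P(w | z_s) ]^(-1),
    # computed in log space for numerical stability
    s = np.asarray(loglik_samples)
    return np.log(len(s)) - logsumexp(-s)

# pick the T that maximizes the estimate, e.g.:
# best_T = max(samples_by_T, key=lambda T: log_p_w_given_T(samples_by_T[T]))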
46
Bayesian model selection
(figure: a corpus w and the model evidence P(w | T), comparing T = 10 and T = 100 topics)
49
Back to the bars data set
50
PNAS corpus preprocessing
– used all D = 28,154 abstracts from 1991–2001
– used any word occurring in at least five abstracts and not on a “stop” list (W = 20,551)
– segmented on any delimiting character, giving a total of n = 3,026,970 word tokens in the corpus
– also used the PNAS class designations for 2001
51
Running the algorithm
– memory requirements are linear in T(W + D); runtime is proportional to nT
– T = 50, 100, 200, 300, 400, 500, 600, (1000)
– ran 8 chains for each T, with a burn-in of 1000 iterations and 10 samples per chain at a lag of 100
– all runs completed in under 30 hours on the Blue Horizon supercomputer at San Diego
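As a back-of-the-envelope check of the T(W + D) memory claim (assuming 4-byte counters; the corpus sizes are from the preceding slide):

# word-topic counts (T x W) plus document-topic counts (T x D),
# i.e., T * (W + D) integer counters for the collapsed sampler
T, W, D = 300, 20551, 28154
entries = T * (W + D)               # 14,611,500 counters
print(entries * 4 / 2**20, "MB")    # about 56 MB at 4 bytes each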
52
A selection of topics
(each block lists the 20 most probable words in one topic)

FORCE SURFACE MOLECULES SOLUTION SURFACES MICROSCOPY WATER FORCES PARTICLES STRENGTH POLYMER IONIC ATOMIC AQUEOUS MOLECULAR PROPERTIES LIQUID SOLUTIONS BEADS MECHANICAL

HIV VIRUS INFECTED IMMUNODEFICIENCY CD4 INFECTION HUMAN VIRAL TAT GP120 REPLICATION TYPE ENVELOPE AIDS REV BLOOD CCR5 INDIVIDUALS ENV PERIPHERAL

MUSCLE CARDIAC HEART SKELETAL MYOCYTES VENTRICULAR MUSCLES SMOOTH HYPERTROPHY DYSTROPHIN HEARTS CONTRACTION FIBERS FUNCTION TISSUE RAT MYOCARDIAL ISOLATED MYOD FAILURE

STRUCTURE ANGSTROM CRYSTAL RESIDUES STRUCTURES STRUCTURAL RESOLUTION HELIX THREE HELICES DETERMINED RAY CONFORMATION HELICAL HYDROPHOBIC SIDE DIMENSIONAL INTERACTIONS MOLECULE SURFACE

NEURONS BRAIN CORTEX CORTICAL OLFACTORY NUCLEUS NEURONAL LAYER RAT NUCLEI CEREBELLUM CEREBELLAR LATERAL CEREBRAL LAYERS GRANULE LABELED HIPPOCAMPUS AREAS THALAMIC

TUMOR CANCER TUMORS HUMAN CELLS BREAST MELANOMA GROWTH CARCINOMA PROSTATE NORMAL CELL METASTATIC MALIGNANT LUNG CANCERS MICE NUDE PRIMARY OVARIAN
53
A selection of topics
(each block lists the 20 most probable words in one topic)

PARASITE PARASITES FALCIPARUM MALARIA HOST PLASMODIUM ERYTHROCYTES ERYTHROCYTE MAJOR LEISHMANIA INFECTED BLOOD INFECTION MOSQUITO INVASION TRYPANOSOMA CRUZI BRUCEI HUMAN HOSTS

ADULT DEVELOPMENT FETAL DAY DEVELOPMENTAL POSTNATAL EARLY DAYS NEONATAL LIFE DEVELOPING EMBRYONIC BIRTH NEWBORN MATERNAL PRESENT PERIOD ANIMALS NEUROGENESIS ADULTS

CHROMOSOME REGION CHROMOSOMES KB MAP MAPPING CHROMOSOMAL HYBRIDIZATION ARTIFICIAL MAPPED PHYSICAL MAPS GENOMIC DNA LOCUS GENOME GENE HUMAN SITU CLONES

MALE FEMALE MALES FEMALES SEX SEXUAL BEHAVIOR OFFSPRING REPRODUCTIVE MATING SOCIAL SPECIES REPRODUCTION FERTILITY TESTIS MATE GENETIC GERM CHOICE SRY

STUDIES PREVIOUS SHOWN RESULTS RECENT PRESENT STUDY DEMONSTRATED INDICATE WORK SUGGEST SUGGESTED USING FINDINGS DEMONSTRATE REPORT INDICATED CONSISTENT REPORTS CONTRAST

MECHANISM MECHANISMS UNDERSTOOD POORLY ACTION UNKNOWN REMAIN UNDERLYING MOLECULAR PS REMAINS SHOW RESPONSIBLE PROCESS SUGGEST UNCLEAR REPORT LEADING LARGELY KNOWN

MODEL MODELS EXPERIMENTAL BASED PROPOSED DATA SIMPLE DYNAMICS PREDICTED EXPLAIN BEHAVIOR THEORETICAL ACCOUNT THEORY PREDICTS COMPUTER QUANTITATIVE PREDICTIONS CONSISTENT PARAMETERS
55
How many topics?
57
Scientific syntax and semantics
(graphical model: an HMM chain of syntactic classes x1 → x2 → x3 → …, where one class emits topic words via z → w and the other classes emit function words)
Factorization of language based on statistical dependency patterns:
– semantics: probabilistic topics, capturing long-range, document-specific dependencies
– syntax: a probabilistic regular grammar, capturing short-range dependencies that are constant across all documents
58
(figure: the composite model on a toy example)
– topics: z = 1 (weight 0.4): HEART 0.2, LOVE 0.2, SOUL 0.2, TEARS 0.2, JOY 0.2; z = 2 (weight 0.6): SCIENTIFIC 0.2, KNOWLEDGE 0.2, WORK 0.2, RESEARCH 0.2, MATHEMATICS 0.2
– syntactic classes x = 1, 2, 3: one class emits topic words; the others emit function words with THE 0.6, A 0.3, MANY 0.1 and OF 0.6, FOR 0.3, BETWEEN 0.1
– class-transition probabilities (edge labels in the figure): 0.9, 0.1, 0.2, 0.8, 0.7, 0.3
59
(the same figure, animated: the model generates a sentence one word at a time, THE …, then THE LOVE …, then THE LOVE OF …, then THE LOVE OF RESEARCH …)
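A minimal generative sketch of this composite model; the class indexing (x = 1 emits topic words) and the class-transition matrix below are assumptions for illustration, since the exact arrangement in the figure did not survive extraction.

import numpy as np

rng = np.random.default_rng(4)
topics = {1: ["HEART", "LOVE", "SOUL", "TEARS", "JOY"],
          2: ["SCIENTIFIC", "KNOWLEDGE", "WORK", "RESEARCH", "MATHEMATICS"]}
classes = {2: (["THE", "A", "MANY"], [0.6, 0.3, 0.1]),
           3: (["OF", "FOR", "BETWEEN"], [0.6, 0.3, 0.1])}
# assumed transitions over x in {1: topic word, 2: determiner, 3: preposition}
trans = {1: [0.1, 0.2, 0.7], 2: [0.9, 0.05, 0.05], 3: [0.8, 0.1, 0.1]}

def generate(n_words=6, x=2):
    words = []
    for _ in range(n_words):
        if x == 1:                     # semantic class: emit a topic word
            z = 1 if rng.random() < 0.4 else 2
            words.append(rng.choice(topics[z]))
        else:                          # syntactic class: emit a function word
            w, p = classes[x]
            words.append(rng.choice(w, p=p))
        x = rng.choice([1, 2, 3], p=trans[x])   # move to the next class
    return " ".join(words)

print(generate())   # e.g., output in the style of "THE LOVE OF RESEARCH ..."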
63
Semantic topics
64
Syntactic classes
(each list gives the most probable words in one class of the probabilistic regular grammar)

Class 5: IN FOR ON BETWEEN DURING AMONG FROM UNDER WITHIN THROUGHOUT THROUGH TOWARD INTO AT INVOLVING AFTER ACROSS AGAINST WHEN ALONG
Class 8: ARE WERE WAS IS WHEN REMAIN REMAINS REMAINED PREVIOUSLY BECOME BECAME BEING BUT GIVE MERE APPEARED APPEAR ALLOWED NORMALLY EACH
Class 14: THE THIS ITS THEIR AN EACH ONE ANY INCREASED EXOGENOUS OUR RECOMBINANT ENDOGENOUS TOTAL PURIFIED TILE FULL CHRONIC ANOTHER EXCESS
Class 25: SUGGEST INDICATE SUGGESTING SUGGESTS SHOWED REVEALED SHOW DEMONSTRATE INDICATING PROVIDE SUPPORT INDICATES PROVIDES INDICATED DEMONSTRATED SHOWS SO REVEAL DEMONSTRATES SUGGESTED
Class 26: LEVELS NUMBER LEVEL RATE TIME CONCENTRATIONS VARIETY RANGE CONCENTRATION DOSE FAMILY SET FREQUENCY SERIES AMOUNTS RATES CLASS VALUES AMOUNT SITES
Class 30: RESULTS ANALYSIS DATA STUDIES STUDY FINDINGS EXPERIMENTS OBSERVATIONS HYPOTHESIS ANALYSES ASSAYS POSSIBILITY MICROSCOPY PAPER WORK EVIDENCE FINDING MUTAGENESIS OBSERVATION MEASUREMENTS
Class 33: BEEN MAY CAN COULD WELL DID DOES DO MIGHT SHOULD WILL WOULD MUST CANNOT THEY ALSO BECOME MAG LIKELY
65
(PNAS, 1991, vol. 88, 4874–4876; each word is followed by its inferred class/topic assignment)
A 23 generalized 49 fundamental 11 theorem 20 of 4 natural 46 selection 46 is 32 derived 17 for 5 populations 46 incorporating 22 both 39 genetic 46 and 37 cultural 46 transmission 46. The 14 phenotype 15 is 32 determined 17 by 42 an 23 arbitrary 49 number 26 of 4 multiallelic 52 loci 40 with 22 two 39 -factor 148 epistasis 46 and 37 an 23 arbitrary 49 linkage 11 map 20, as 43 well 33 as 43 by 42 cultural 46 transmission 46 from 22 the 14 parents 46. Generations 46 are 8 discrete 49 but 37 partially 19 overlapping 24, and 37 mating 46 may 33 be 44 nonrandom 17 at 9 either 39 the 14 genotypic 46 or 37 the 14 phenotypic 46 level 46 (or 37 both 39 ). I 12 show 34 that 47 cultural 46 transmission 46 has 18 several 39 important 49 implications 6 for 5 the 14 evolution 46 of 4 population 46 fitness 46, most 36 notably 4 that 47 there 41 is 32 a 23 time 26 lag 7 in 22 the 14 response 28 to 31 selection 46 such 9 that 47 the 14 future 137 evolution 46 depends 29 on 21 the 14 past 24 selection 46 history 46 of 4 the 14 population 46.
(graylevel = “semanticity”, the probability of using LDA over HMM)
66
(PNAS, 1996, vol. 93, 14628–14631; each word is followed by its inferred class/topic assignment)
The 14 ''shape 7 '' of 4 a 23 female 115 mating 115 preference 125 is 32 the 14 relationship 7 between 4 a 23 male 115 trait 15 and 37 the 14 probability 7 of 4 acceptance 21 as 43 a 23 mating 115 partner 20. The 14 shape 7 of 4 preferences 115 is 32 important 49 in 5 many 39 models 6 of 4 sexual 115 selection 46, mate 115 recognition 125, communication 9, and 37 speciation 46, yet 50 it 41 has 18 rarely 19 been 33 measured 17 precisely 19. Here 12 I 9 examine 34 preference 7 shape 7 for 5 male 115 calling 115 song 125 in 22 a 23 bushcricket *13 (katydid *48 ). Preferences 115 change 46 dramatically 19 between 22 races 46 of 4 a 23 species 15, from 22 strongly 19 directional 11 to 31 broadly 19 stabilizing 45 (but 50 with 21 a 23 net 49 directional 46 effect 46 ). Preference 115 shape 46 generally 19 matches 10 the 14 distribution 16 of 4 the 14 male 115 trait 15. This 41 is 32 compatible 29 with 21 a 23 coevolutionary 46 model 20 of 4 signal 9 -preference 115 evolution 46, although 50 it 41 does 33 not 37 rule 20 out 17 an 23 alternative 11 model 20, sensory 125 exploitation 150. Preference 46 shapes 40 are 8 shown 35 to 31 be 44 genetic 11 in 5 origin 7.
68
End of presentation on topic models… switch now to the author-topic model
69
Recent Results on Author-Topic Models
70
Can we model authors, given documents?
(more generally: build statistical profiles of entities given sparse observed data)
(figure: Authors linked to the Words of their documents)
71
(figure: Authors linked to Words through Hidden Topics)
Model = author-topic distributions + topic-word distributions
Parameters learned via Bayesian learning
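A minimal sketch of one standard author-topic generative process, in which each word's author is chosen uniformly from the document's authors, a topic is drawn from that author's distribution, and a word is drawn from the topic; the dimensions and seed are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
A, T, W = 4, 3, 50                     # authors, topics, vocabulary size
theta = rng.dirichlet(np.ones(T), A)   # author-topic distributions
phi = rng.dirichlet(np.ones(W), T)     # topic-word distributions

def generate_doc(author_ids, n_words=20):
    words = []
    for _ in range(n_words):
        a = rng.choice(author_ids)             # pick one of the doc's authors
        z = rng.choice(T, p=theta[a])          # topic from that author's mixture
        words.append(rng.choice(W, p=phi[z]))  # word id from the topic
    return words

print(generate_doc([0, 2]))            # a document written by authors 0 and 2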