Yangqiu Song Lane Department of CSEE West Virginia University

Presentation on theme: "Yangqiu Song Lane Department of CSEE West Virginia University"— Presentation transcript:

1 Text Classification without Supervision: Incorporating World Knowledge and Domain Adaptation
Yangqiu Song, Lane Department of CSEE, West Virginia University
Much of the work was done at UIUC.

2 Collaborators Dan Roth Haixun Wang Shusen Wang Weizhu Chen

3 Text Categorization
Traditional machine learning approach:
Label data
Train a classifier
Make predictions

4 Challenges
Domain expert annotation
Large-scale problems
Diverse domains and tasks: topics, languages
Short and noisy texts: tweets, queries, …

5 Reduce Labeling Efforts
Many diverse and fast-changing domains: search engines, social media
Domain-specific tasks: entertainment or sports?
Semi-supervised learning; transfer learning; zero-shot learning
A more general way?

6 Our Solution: Knowledge Enabled Learning
Millions of entities and concepts; billions of relationships
Traditional models treat labels as "numbers or IDs"
Labels carry a lot of information!

7 Example: Knowledge Enabled Text Classification
Dong Nguyen announced that he would be removing his hit game Flappy Bird from both the iOS and Android app stores, saying that the success of the game is something he never wanted. Some fans of the game took it personally, replying that they would either kill Nguyen or kill themselves if he followed through with his decision.
Pick a label: Class1 or Class2? Mobile Game or Sports?

8 Dataless Text Categorization: Classification on the Fly
Mobile Game or Sports?
(Good) label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels
M.-W. Chang, L.-A. Ratinov, D. Roth, V. Srikumar: Importance of Semantic Representation: Dataless Classification. AAAI 2008.
Y. Song, D. Roth: On Dataless Hierarchical Text Classification. AAAI 2014.
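The on-the-fly classification above needs no labeled data: once labels and documents live in the same semantic space, prediction is just a nearest-label lookup. A minimal sketch, using a made-up 3-dimensional space in place of the real ESA/Wikipedia space (the vectors and label names are toy assumptions):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def dataless_classify(doc_vec, label_vecs):
    """Assign the label whose semantic-space vector is closest to the document."""
    sims = {label: cosine(doc_vec, vec) for label, vec in label_vecs.items()}
    return max(sims, key=sims.get)

# Toy 3-dimensional "semantic space"; real systems project into
# hundreds of thousands of Wikipedia concepts.
label_vecs = {
    "mobile game": np.array([0.9, 0.1, 0.0]),
    "sports":      np.array([0.0, 0.2, 0.9]),
}
doc = np.array([0.8, 0.3, 0.1])  # hypothetical projection of the Flappy Bird text
print(dataless_classify(doc, label_vecs))  # -> mobile game
```

The key design point is that the label name itself supplies the supervision: a better label name gives a better label vector, with no training loop anywhere.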

9 Challenges of Using Knowledge
Representation, inference, and learning
Scalability; domain adaptation; open-domain classes
Knowledge specification; disambiguation
Data vs. knowledge representation: compare different representations

10 Outline of the Talk Dataless Text Classification: Classify Documents on the Fly
Label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels

11 Difficulty of Text Representation
Polysemy and Synonym cat company fruit tree Meaning Typicality scores Ambiguity Variability Basic level concepts Language cat feline kitty moggy apple Rosch, E. et al. Basic objects in natural categories. Cognitive Psychology Rosch, E. Principles of categorization. In Rosch, E., and Lloyd, B., eds., Cognition and Categorization

12 Typicality of Entities
(Figure: typicality of entities for the concept "bird")

13 Basic Level Concepts
What do we usually call it? pug, pet, dog, mammal, animal

14 We use the right level of concepts to describe things!
pug, bulldog

15 Probase: A Probabilistic Knowledge Base
Pipeline: Web document cleaning, information extraction, semantic cleaning, knowledge integration
1.68 billion documents; Hearst patterns ("Animals such as dogs and cats.")
Integrated with Freebase and Wikipedia; mutually exclusive concepts
M. A. Hearst. Automatic acquisition of hyponyms from large text corpora. COLING 1992.
W. Wu, et al. Probase: a probabilistic taxonomy for text understanding. SIGMOD 2012. (Data released.)

16 Concept Distribution
(Figure: distribution of concepts, e.g., city, country, disease, magazine, bank, local school, Java tool, big bank, BI product)

17 Typicality
(Figure: typicality scores, e.g., "dog" for the concept "animal")

18 Basic Level Concepts
(Figure: basic-level concepts for "robin" and "penguin")

19 Concepts of Multiple Entities
Obama’s real-estate policy president, politician investment, property, asset, plan president, politician, investment, property, asset, plan Explicit Semantic Analysis (ESA) + w1 w2 wn = E. Gabrilovich and S. Markovitch. Wikipedia-based Semantic Interpretation for Natural Language Processing. J. of Art. Intell. Res. (JAIR)

20 Multiple Related Entities
apple adobe software company, brand, fruit brand, software company software company, brand, fruit software company, brand Intersection instead of union!

21 Probabilistic Conceptualization
P(concept | related entities)
P(fruit | adobe, apple) = 0, because P(adobe | fruit) = 0
Uses typicality and basic-level concepts
Song et al., Int. Joint Conf. on Artif. Intell. (IJCAI). 2011.
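The intersection behavior on the previous slides falls out of a naive-Bayes-style scoring, P(c | e1, …, en) ∝ P(c) ∏i P(ei | c): one zero typicality zeroes out the whole concept. A sketch with a made-up typicality table (the numbers and priors are toy assumptions, standing in for Probase counts):

```python
# Toy typicality table P(entity | concept); in practice these come from
# Probase-style "concept -> entity" co-occurrence counts.
typicality = {
    "fruit":            {"apple": 0.30, "pear": 0.20},
    "software company": {"apple": 0.10, "adobe": 0.15},
    "brand":            {"apple": 0.05, "adobe": 0.03},
}
prior = {"fruit": 0.4, "software company": 0.3, "brand": 0.3}

def conceptualize(entities):
    """Score each concept by P(c) * prod_i P(e_i | c): the 'intersection' rule."""
    scores = {}
    for c, p_c in prior.items():
        s = p_c
        for e in entities:
            s *= typicality[c].get(e, 0.0)  # unseen entity -> probability 0
        scores[c] = s
    return scores

print(conceptualize(["apple"]))           # "fruit" wins on its own
print(conceptualize(["apple", "adobe"]))  # "fruit" drops to 0; "software company" wins
```

This is why "adobe" disambiguates "apple": the fruit reading cannot explain both entities, so its joint score collapses to zero.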

22 Given “China, India, Russia, Brazil”

23 Given “China, India, Japan, Singapore”

24 Outline of the Talk Dataless Text Classification: Classify Documents on the Fly
Label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels

25 Generic Short Text Conceptualization
P(concept | short text)
1. Ground terms to the knowledge base
2. Cluster entities
3. Inside clusters: intersection
4. Between clusters: union
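Steps 3 and 4 can be sketched independently of the clustering itself: within a cluster, concepts are intersected (as on the previous slide); across clusters, the per-cluster concept distributions are combined. A minimal sketch of the between-cluster union step, with toy distributions that are assumed to have already gone through the within-cluster intersection:

```python
def merge_cluster_concepts(per_cluster):
    """Union step: average the per-cluster concept distributions.
    The within-cluster intersection is assumed to have run already."""
    concepts = set().union(*per_cluster)
    n = len(per_cluster)
    return {c: sum(d.get(c, 0.0) for d in per_cluster) / n for c in concepts}

# Toy per-cluster distributions (made-up numbers):
countries = {"country": 0.9, "economy": 0.1}  # from {"U.S.", "Japan", "U.K."}
fruits    = {"fruit": 1.0}                    # from {"apple", "pear"}
merged = merge_cluster_concepts([countries, fruits])
print(merged)  # each cluster contributes half of the final mass
```

Averaging rather than multiplying across clusters is what keeps unrelated entity groups ("countries" and "fruits") from zeroing each other out.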

26 Markov Random Field Model
Parameter estimation: concept distribution Entity clique: intersection To realize the general text conceptualization, we built a markov random field model. First of all, we use a EM-like algorithm to detect the type of the entities. For example population can be an instance of concept demographic data, but it can also be an attribute of concept country. If it co-occurs with a country such as united states, then we annotate population as an attribute. Then we build the entity cliques based on clustering algorithm. Inside the cliques, we build the energy function of Markov random field using the intersection rule. Then with different cliques, we perform parameter estimation, which is similar to the average of all the concepts for different cliques. Entity type: instance or attribute

27 Given "U.S.", "Japan", "U.K."; "apple", "pear"; "BBC", "New York Times"

28 Tweet Clustering
Better access to the right level of concepts
Song et al., Int. Joint Conf. on Artif. Intell. (IJCAI)

29 Web Search Relevance
Evaluation data: 300K Web queries; 19M query-URL pairs
Historical data: 8M URLs; 8B query-URL clicks
Song et al., Int. Conf. on Inf. and Knowl. Man. (CIKM).

30 Domain Adaptation
World knowledge bases: general purpose, with information bias
Domain-dependent tasks, e.g., classification/clustering of entertainment vs. sports
Knowledge about science/technology is of little use for such tasks

31 Domain Adaptation for Corpus
Entity type: instance or attribute
Entity clique: intersection
Parameter estimation: concept distribution
Hyper-parameter estimation: domain adaptation
Complexity:

32 Domain Adaptation Results
Song et al., Int. Joint Conf. on Artif. Intell. (IJCAI)

33 Similarity and Relatedness
Similarity: a specific type of relatedness; synonyms, hyponyms/hypernyms, and siblings are highly similar (doctor vs. surgeon, bike vs. bicycle)
Relatedness: topically related or based on any other semantic relation (heart vs. surgeon, tire vs. car)
In the following, we focus on Wikipedia! The methodologies apply to entity relatedness and domain adaptation.

34 Dataless Text Classification: Classify Documents on the Fly
Label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels

35 Classification in the Same Semantic Space
Mobile Game or Sports?
Explicit Semantic Analysis (ESA): represent documents and labels as weighted sums of Wikipedia concept vectors.
E. Gabrilovich and S. Markovitch. Wikipedia-based Semantic Interpretation for Natural Language Processing. JAIR 2009.
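The ESA weighted-sum idea can be sketched in a few lines. The word-to-concept affinity vectors below are invented for illustration (real ESA derives them as tf-idf weights of words in Wikipedia articles, over hundreds of thousands of concepts):

```python
import numpy as np

# Toy concept space and word affinity vectors (made-up numbers standing in
# for tf-idf weights over Wikipedia articles).
concepts = ["Flappy Bird", "Mobile game", "Sport"]
word_vectors = {
    "game":   np.array([0.4, 0.8, 0.3]),
    "ios":    np.array([0.2, 0.6, 0.0]),
    "stores": np.array([0.0, 0.3, 0.1]),
}

def esa(text, weights=None):
    """ESA representation: weighted sum of the words' concept vectors."""
    words = [w for w in text.lower().split() if w in word_vectors]
    weights = weights or {w: 1.0 for w in words}  # tf-idf weights in practice
    return sum(weights[w] * word_vectors[w] for w in words)

vec = esa("game iOS stores")
print(concepts[int(np.argmax(vec))])  # -> Mobile game
```

Both a document and a label name go through the same `esa` mapping, which is what puts them in "the same semantic space" for the cosine comparison.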

36 Classification of 20 Newsgroups Documents: Cosine Similarity
L1: 6 classes L2: 20 classes OHLDA: Same hierarchy Word2vec Trained on wiki Skipgram V.Ha-Thuc, and J.-M. Renders, Large-scale hierarchical text classification without labelled data. In WSDM 2011. Blei et al., Latent Dirichlet Allocation. J. of Mach. Learn. Res. (JMLR) Mikolov et al. Efficient Estimation of Word Representations in Vector Space. NIPS

37 Two Factors in Dataless Classification
Length of document; number of labels
Datasets: 2-newsgroups; 20-newsgroups; 103 RCV1 topics; 611 Wikipedia categories
Random-guess baselines: balanced binary classification and multi-class classification

38 Similarity
Cosine similarity between two short texts:
Text 1: "Champaign Police Make Arrest Armed Robbery Cases"
Text 2: "Two Arrested UI Campus"

39 Representation Densification
Comparing sparse vectors x and y: exact cosine; average matching; max matching; Hungarian matching
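A sketch of the Hungarian-matching variant: instead of comparing the two sparse vectors index by index, match each active dimension of one text to at most one active dimension of the other so that total similarity is maximized. The 2-D dimension embeddings below are toy assumptions; the optimal assignment is found by brute force here, whereas `scipy.optimize.linear_sum_assignment` would be used at realistic sizes:

```python
import itertools
import numpy as np

def densified_similarity(X, Y):
    """One-to-one matching of the two texts' active dimensions that maximizes
    total cosine similarity (Hungarian matching, brute-forced for a toy case).
    Rows of X and Y are unit-norm embeddings of each text's active dimensions."""
    S = X @ Y.T                       # pairwise cosines of dimension embeddings
    n = min(S.shape)
    best = max(sum(S[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(S.shape[1]), n))
    return best / n

# Two texts whose active concepts agree in meaning but not in index:
X = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy unit embeddings
Y = np.array([[0.0, 1.0], [1.0, 0.0]])  # same concepts, swapped order
print(densified_similarity(X, Y))  # -> 1.0; index-wise cosine would be 0
```

The point of densification is exactly this gap: sparse vectors with near-synonymous but disjoint active dimensions score 0 under plain cosine, while the matched score recovers their similarity.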

40 rec.autos vs. sci.electronics (1/16 of each document: 13 words per text)
Song and Roth. North Amer. Chap. Assoc. Comp. Ling. (NAACL). 2015.

41 Dataless Text Classification: Classify Documents on the Fly
Label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels

42 Classification of 20 Newsgroups Documents
Cosine similarity vs. supervised classification
Blei et al. Latent Dirichlet Allocation. J. of Mach. Learn. Res. (JMLR). 2003.
Mikolov et al. Efficient Estimation of Word Representations in Vector Space. ICLR Workshop. 2013.

43 Bootstrapping with Unlabeled Data
An application of world knowledge of label meaning, plus domain adaptation:
1. Initialize: label N documents per label using pure similarity-based classification
2. Train a classifier and label N more documents
3. Repeat until no unlabeled document remains
Example labels: mobile games, sports
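The loop above can be sketched with a nearest-centroid stand-in for the trained classifier (the real system trains e.g. a supervised classifier each round; the document vectors, label vectors, and batch size below are toy assumptions):

```python
import numpy as np

def bootstrap(doc_vecs, label_vecs, n_per_round=2, rounds=100):
    """Dataless bootstrapping sketch: seed each label with its most similar
    documents, retrain a nearest-centroid 'classifier', repeat until all
    documents are labeled."""
    labels = list(label_vecs)
    centroids = np.stack([label_vecs[l] for l in labels]).astype(float)
    assigned = {}                                 # doc index -> label index
    for _ in range(rounds):
        unlabeled = [i for i in range(len(doc_vecs)) if i not in assigned]
        if not unlabeled:
            break
        sims = doc_vecs[unlabeled] @ centroids.T  # cosine if rows unit-norm
        # label the n most confident unlabeled documents this round
        order = np.argsort(-sims.max(axis=1))[:n_per_round]
        for j in order:
            assigned[unlabeled[j]] = int(sims[j].argmax())
        # "retrain": centroids move from label vectors toward labeled docs
        for k in range(len(labels)):
            docs = [i for i, l in assigned.items() if l == k]
            if docs:
                c = doc_vecs[docs].mean(axis=0)
                centroids[k] = c / np.linalg.norm(c)
    return {i: labels[l] for i, l in assigned.items()}

docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.2, 0.8]])
docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
out = bootstrap(docs, {"mobile games": np.array([1.0, 0.0]),
                       "sports":       np.array([0.0, 1.0])})
print(out)
```

Only the first round uses the label-name vectors directly; every later round adapts the decision boundary to the corpus, which is where the 0.68 → 0.84 gain on 20 newsgroups comes from.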

44 Classification of 20 Newsgroups Documents
Bootstrapped: 0.84 Pure Similarity: 0.68 Song and Roth. Assoc. Adv. Artif. Intell. (AAAI). 2014

45 Hierarchical Classification: Considering Label Dependency
Top-down classification vs. bottom-up (flat) classification
(Figure: label tree with root, internal nodes A, B, …, N, and leaves 1, 2, …, M)
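Top-down classification can be sketched as a greedy descent: at each node, pick the child label most similar to the document and recurse, instead of scoring all leaves flatly. The hierarchy, label embeddings, and similarity function below are toy assumptions:

```python
def top_down_classify(doc_vec, tree, label_vecs, similarity):
    """Top-down sketch: descend into the most similar child until a leaf."""
    node = "Root"
    path = []
    while tree.get(node):  # node has children
        node = max(tree[node],
                   key=lambda c: similarity(doc_vec, label_vecs[c]))
        path.append(node)
    return path

# Hypothetical two-level hierarchy with toy 2-D label embeddings:
tree = {"Root": ["A", "B"], "A": ["A.1", "A.2"], "B": []}
label_vecs = {"A": (0.9, 0.1), "B": (0.1, 0.9),
              "A.1": (1.0, 0.0), "A.2": (0.5, 0.5)}
dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
print(top_down_classify((0.8, 0.2), tree, label_vecs, dot))  # -> ['A', 'A.1']
```

The trade-off the next slide evaluates: top-down exploits label dependency and prunes the search, but an error at an upper node cannot be recovered lower down, whereas bottom-up/flat scoring considers every leaf.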

46 Top-down vs. Bottom-up
RCV1: 804,414 documents; 82 categories in 4 levels; 103 nodes in hierarchy; 3.24 labels/document
Song and Roth. Assoc. Adv. Artif. Intell. (AAAI). 2014.

47 Dataless Text Classification: Classify Documents on the Fly
Label names; documents; world knowledge
Map labels/documents to the same space
Compute document and label similarities
Choose labels

48 Comparison of learning paradigms
                              Labeled data  Unlabeled data  Label names  I.I.D. train/test
Supervised learning           Yes           No              No           Yes
Unsupervised learning         No            Yes             No           Yes
Semi-supervised learning      Yes           Yes             No           Yes
Transfer learning             Yes           Yes             No           No
Zero-shot learning            Yes           No              Yes          No
Dataless (pure similarity)    No            No              Yes          No
Dataless (bootstrapping)      No            Yes             Yes          No

49 Conclusions
Dataless classification reduces labeling work for thousands of documents
Compared semantic representations using world knowledge: probabilistic conceptualization (PC), explicit semantic analysis (ESA), word embeddings (word2vec), topic models (LDA), and a combination of ESA and word2vec
Unified PC and ESA in a Markov random field model
Domain adaptation via hyper-parameter estimation
Bootstrapping: refining the classifier
Using knowledge as structured information instead of flat features!
Advertisement: Session 7B, DM835
Thank You!


51 Correlation with Human Annotation of IS-A Relationships
The Web vs. the Gigaword corpus
A. Zhila, W. Yih, C. Meek, G. Zweig, and T. Mikolov. Combining Heterogeneous Models for Measuring Relational Similarity. NAACL-HLT 2013.

