
1 Research on Intelligent Image Indexing and Retrieval. James Z. Wang, School of Information Sciences and Technology, The Pennsylvania State University. Joint work: Jia Li, Department of Statistics, The Pennsylvania State University

2 Photographer annotation: Greece (Athens, Crete, May/June 2003), taken by James Z. Wang

3 “Building, sky, lake, landscape, Europe, tree” Can a computer do this?

4 EMPEROR database by C.-C. Chen

5 Outline Introduction Our related SIMPLIcity work ALIP: Automatic modeling and learning of concepts Conclusions and future work

6 The Field: Image Retrieval. The retrieval of relevant images from an image database on the basis of automatically derived image features. Applications: biomedicine, homeland security, law enforcement, NASA, defense, commerce, culture, education, entertainment, the Web, … Our approach: wavelets, statistical modeling, supervised and unsupervised learning; address the problem generically so it applies across applications.

7 Chicana Art Project, 1995. 1000+ high-quality paintings from the Stanford Art Library. Goal: help students and researchers find visually related paintings. Used wavelet-based features [Wang+, 1997]

8 Feature-based Approach. Signature: (feature 1, feature 2, …, feature n). + Handles low-level semantic queries. + Many features can be extracted. -- Cannot handle higher-level queries (e.g., objects)

9 Region-based Approach. Extract objects from images first. + Handles object-based queries, e.g., find images with objects similar to some given objects. + Reduces feature storage adaptively. -- Object segmentation is very difficult. -- User interface: region marking, feature combination

10 UCB Blobworld [Carson+, 1999]

11 Outline Introduction Our related SIMPLIcity work ALIP: Automatic modeling and learning of concepts Conclusions and future work

12 Motivations Observations: Human object segmentation relies on knowledge Precise computer image segmentation is a very difficult open problem Hypothesis: It is possible to build robust computer matching algorithms without first segmenting the images accurately

13 Our SIMPLIcity Work [PAMI, 2001(1)][PAMI, 2001(9)][PAMI, 2002(9)]. Semantics-sensitive Integrated Matching for Picture LIbraries. Major features: sensitive to semantics (combines statistical semantic classification with image retrieval); efficient processing (wavelet-based feature extraction); reduced sensitivity to inaccurate segmentation and a simple user interface (Integrated Region Matching, IRM)

14 Wavelets

15 Fast Image Segmentation. Partition an image into 4×4 blocks. Extract wavelet-based features from each block. Use the k-means algorithm to cluster the feature vectors into 'regions'. Compute the shape feature by normalized inertia.

16 K-means Statistical Clustering. Some segmentation algorithms take 8 minutes of CPU time per image. Our approach: use an unsupervised statistical learning method to analyze the feature space. Goal: minimize the mean squared error between the training samples and their representative prototypes (learning VQ) [Hastie+, Elements of Statistical Learning, 2001]
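The block-partition-then-cluster pipeline of the last two slides can be sketched as follows. This is a minimal illustration, not the actual system: plain per-block mean intensity stands in for the wavelet features, and a from-scratch k-means (with a simplified deterministic initialization) stands in for the learning-VQ implementation.

```python
import numpy as np

def block_features(img, block=4):
    """Mean intensity per block -- a stand-in for the wavelet-based
    features extracted from each 4x4 block in the real system."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    tiles = img[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3)).reshape(-1, 1)

def kmeans(X, k, iters=20):
    """Plain k-means: minimize the mean squared error between the
    samples and their representative prototypes (learning VQ)."""
    # Simplified deterministic init: spread prototypes over the sorted data.
    idx = np.argsort(X.sum(axis=1))
    centers = X[idx[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # Assign each feature vector to its nearest prototype.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each prototype to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy image: dark left half, bright right half -> two 'regions'.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels, centers = kmeans(block_features(img), k=2)
```

The cluster labels, mapped back to the block grid, form the segmentation; the normalized-inertia shape feature mentioned on the slide would then be computed per region.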

17 IRM: Integrated Region Matching. IRM defines an image-to-image distance as a weighted sum of region-to-region distances. The weighting matrix is determined by significance constraints and a 'most similar highest priority' (MSHP) greedy algorithm.
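A sketch of the MSHP weighting just described: each region carries a significance (e.g., its area fraction), region pairs are visited from most to least similar, and each pair consumes as much weight as its two regions still allow. Details such as the exact region distance and significance definition are simplified here.

```python
import numpy as np

def irm_distance(sig1, sig2, D):
    """Integrated Region Matching (sketch): image-to-image distance as a
    weighted sum of region-to-region distances.  sig1 and sig2 are region
    significances (e.g., area fractions summing to 1); D[i, j] is the
    distance between region i of image 1 and region j of image 2.
    Weights are filled greedily, most similar pair first (MSHP),
    subject to the significance constraints."""
    s1 = np.asarray(sig1, dtype=float).copy()
    s2 = np.asarray(sig2, dtype=float).copy()
    D = np.asarray(D, dtype=float)
    # Visit region pairs in order of increasing distance.
    pairs = sorted((D[i, j], i, j)
                   for i in range(len(s1)) for j in range(len(s2)))
    total = 0.0
    for d, i, j in pairs:
        w = min(s1[i], s2[j])   # largest weight still allowed for this pair
        if w > 0:
            total += w * d
            s1[i] -= w          # use up significance on both sides
            s2[j] -= w
    return total
```

Because every region's significance is eventually matched somewhere, an inaccurate split of one object into two regions degrades the distance only gradually, which is the robustness property slide 19 lists.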

18 A 3-D Example for IRM

19 IRM: Major Advantages 1. Reduces the influence of inaccurate segmentation 2. Helps to clarify the semantics of a particular region given its neighbors 3. Provides the user with a simple interface

20 Experiments and Results. Speed: 800 MHz Pentium PC running Linux. Databases: a 200,000-image general-purpose DB (60,000 photographs + 140,000 hand-drawn artworks); 70,000 pathology image segments. Image indexing time: one second per image. Image retrieval time: without the scalable IRM, 1.5 seconds of CPU time per query; with the scalable IRM, 0.15 seconds per query. External query: one extra second of CPU time.

21 Random selection from the 200,000-image COREL DB

22 Current SIMPLIcity System Query Results

23 External Query

24 EMPEROR Project C.-C. Chen Simmons College

25 (1) Random Browsing

26 (2) Similarity Search

27

28 (3) External Image Query

29 Outline Introduction Our related SIMPLIcity work ALIP: Automatic modeling and learning of concepts Conclusions and future work

30 Why ALIP? Size: databases of 1 million+ images. Understandability and vision: "meaning" depends on the point of view. Can we translate image contents and structure into linguistic terms (e.g., "dogs", "Kyoto")?

31 (cont.) Query formulation. SIMILARITY: looks similar to a given picture. OBJECT: contains a historical building. OBJECT RELATIONSHIP: contains a cat and a person. MOOD: a happy picture. TIME/PLACE: sunset in Athens.

32 Automatic Linguistic Indexing of Pictures (ALIP). An exciting research direction. Differences from traditional computer vision: ALIP must deal with a large number of concepts; ALIP rarely has enough "good" (diversified/3D?) training images; ALIP builds knowledge bases automatically for real-time linguistic indexing (a generic method); ALIP is highly interdisciplinary (AI, statistics, mining, imaging, applied math, domain knowledge, …)

33 Automatic Modeling and Learning of Concepts for Image Indexing. Observations: human beings build models of objects or concepts by mining visual scenes; the learned models are stored in the brain and used during recognition. Hypothesis: it is achievable for computers to mine and learn a large collection of concepts through 2D or 3D image-based training. 2000-now: [ACM MM, 2002][PAMI, 2003(10)]

34 Concepts to be Trained. Concepts: the basic building blocks in determining the semantic meanings of images. Training concepts can be categorized, from low-level to high-level, as: basic object (flower, beach); object composition (building+grass+sky+tree); location (Asia, Venice); time (night sky, winter frost); abstract (sports, sadness)

35 Database: most significant Asian paintings (Copyright: museums/governments/…) Question: can we build a “dictionary” of different painting styles?

36 Database: terracotta soldiers of the First Emperor of China. Question: can we train a computer to annotate these? EMPEROR images courtesy of C.-C. Chen

37 System Design. Train statistical models for a dictionary of concepts using sets of training images (2D images are currently used; 3D-image training could be much better). Compare images based on model comparison. Select the most statistically significant concept(s) to index images linguistically. Initial experiment: 600 concepts, each trained with 40 images; 15 minutes of Pentium CPU time per concept, trained only once; a highly parallelizable algorithm.
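The "compare against every concept model, pick the best" step can be sketched as below. The real system scores an image's wavelet features under a 2-D MHMM per concept; here a hypothetical one-dimensional Gaussian per concept stands in for that likelihood, purely to show the ranking structure.

```python
import numpy as np

def log_likelihood(feats, mean, var):
    """Log-likelihood of image feature values under one concept model.
    A per-concept Gaussian is a hypothetical stand-in for the 2-D
    wavelet MHMM likelihood the real system computes."""
    feats = np.asarray(feats, dtype=float)
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - 0.5 * (feats - mean) ** 2 / var))

def rank_concepts(feats, models, top=3):
    """Score the image under every model in the concept dictionary and
    return the best-matching concept names, most likely first."""
    scores = {name: log_likelihood(feats, m, v)
              for name, (m, v) in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Toy dictionary of two concepts (illustrative parameters only).
models = {"beach": (0.8, 0.1), "forest": (0.2, 0.1)}
best = rank_concepts([0.75, 0.85], models, top=1)
```

Because each concept is scored independently, the per-concept training and scoring parallelize trivially, which is the scalability point the slide makes.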

38 Training Process

39 Automatic Annotation Process

40 Training Training images used to train the concept “male” with description “man, male, people, cloth, face”

41 Initial Model: 2-D Wavelet MHMM [Li+, 1999]. Model: inter-scale and intra-scale dependence. States: a hierarchical Markov mesh, unobservable. Features (as in SIMPLIcity): multivariate Gaussian distributed given the states. A model serves as a knowledge base for a concept.

42 2D MHMM. Start from the conventional 1-D HMM. Extend to 2D transitions, with conditionally Gaussian distributed feature vectors. Then add Markovian statistical dependence across resolutions. Use the EM algorithm to estimate the parameters.
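The conventional 1-D starting point can be made concrete with the scaled forward algorithm, which computes the model likelihood used for scoring. This sketch deliberately omits what the MHMM adds (2D transitions and across-resolution dependence) as well as the EM parameter estimation.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def hmm_log_likelihood(obs, pi, A, means, variances):
    """Scaled forward algorithm for a 1-D Gaussian HMM.
    pi: initial state probabilities, A: state transition matrix,
    means/variances: per-state Gaussian emission parameters."""
    # alpha[s] = P(obs so far, current state = s), rescaled at each step.
    alpha = pi * gaussian_pdf(obs[0], means, variances)
    log_like = 0.0
    for x in obs[1:]:
        c = alpha.sum()                      # rescale to avoid underflow
        log_like += np.log(c)
        # Propagate through the transition matrix, then weight by emission.
        alpha = (alpha / c) @ A * gaussian_pdf(x, means, variances)
    return log_like + np.log(alpha.sum())
```

In the 2-D extension each state conditions on its left and upper neighbors on the pixel lattice instead of a single predecessor, and the multiresolution version links states across wavelet scales.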

43 Annotation Process. Statistical significance is computed to annotate images, favoring the selection of rare words. When n, m >> k, we have

44 Preliminary Results. Computer predictions for sample images: people, Europe, man-made, water; building, sky, lake, landscape, Europe, tree; people, Europe, female; food, indoor, cuisine, dessert; snow, animal, wildlife, sky, cloth, ice, people

45 More Results

46 Results: using our own photographs. P: photographer annotation. Underlined words: words predicted by the computer. (Parentheses): words not in the computer's learned "dictionary"

47 Systematic Evaluation. 10 classes: Africa, beach, buildings, buses, dinosaurs, elephants, flowers, horses, mountains, food.

48 600-class Classification. Task: classify a given image into one of the 600 semantic classes. Gold standard: the photographer/publisher classification. This procedure provides lower bounds on the accuracy measures because: semantics may overlap among classes (e.g., "Europe" vs. "France" vs. "Paris", or "tigers I" vs. "tigers II"); training images in the same class may not be visually similar (e.g., the "sport events" class includes different sports and different shooting angles). Result: on 11,200 test images, ALIP selected the exact class as the best choice 15% of the time, i.e., about 90 times the 1/600 ≈ 0.17% accuracy expected from random guessing.

49 Can ALIP learn to be a domain specialist? For each concept, train the ALIP system with a handful of EMPEROR images and their metadata. Use the trained models of these concepts to annotate images.

50 Experiments Eight (8) images are used to train the concept “Terracotta Warriors and Horses” Four (4) images are used to train the concept “Roof Tile-end”

51 Classification Tests Terracotta warriors and horses: 98% The Great Wall: 98% Roof Tile-end: 82% Terracotta warriors – head: 96% Afang Palace: 82%

52 The only wrongly annotated images and their true annotations

53 Preliminary Results on Painting Images

54 Classification of Painters Five painters: SHEN Zhou (Ming Dynasty), DONG Qichang (Ming), GAO Fenghan (Qing), WU Changshuo (late Qing), ZHANG Daqian (modern China)

55 Outline Introduction Our related SIMPLIcity work ALIP: Automatic modeling and learning of concepts Conclusions and future work

56 Conclusions Automatic Linguistic Indexing of Pictures Highly challenging but crucially important Interdisciplinary collaboration is critical Approaches: Penn State, UC Berkeley,… Our ALIP System: Automatic modeling and learning of semantic concepts 600 concepts can be learned automatically Application of ALIP to historical materials

57 Advantages of Our Approach Accumulative learning Highly scalable (unlike CART, SVM, ANN) Flexible: Amount of training depends on the complexity of the concept Context-dependent: Spatial relations among pixels taken into consideration Universal image similarity: statistical likelihood rather than relying on segmentation

58 Limitations. Training with 2D images, which carry no information about actual object sizes. Not enough training images for complex concepts. Low diversity of available training images. Open questions: iterative training with negative examples? Training with conflicting opinions (learning the same subject from two teachers with different viewpoints)? …

59 Future Work. Explore new methods for better accuracy: refine the statistical modeling of images; learn from 3D (possibly using 2D-to-3D technologies if images cannot be acquired directly in 3D); refine the matching schemes. Apply these methods to special image databases (e.g., art, biomedicine) and very large databases (e.g., Web images). Integration with large-scale information systems. …

60 Acknowledgments. NSF ITR (since 08/2002); endowed professorship from the PNC Foundation; equipment grants from SUN and NSF; Penn State Univ. Earlier funding (1995-2000): IBM QBIC, NEC AMORA, SRI AI, Stanford Lib/Math/Biomedical Informatics/CS, Lockheed Martin, NSF DL2

61 More Information: http://wang.ist.psu.edu (papers in PDF, demos, etc.) Monographs: 1. J. Z. Wang, Integrated Region-based Image Retrieval, Kluwer, 2001. 2. J. Li, R. M. Gray, Image Segmentation and Compression using Hidden Markov Models, Kluwer, 2000.

