February 26, 2007. Content-Based Image Retrieval. Saint-Petersburg State University. Natalia Vassilieva, Ilya Markov.


February 26, 2007. Content-Based Image Retrieval. Saint-Petersburg State University. Natalia Vassilieva, Ilya Markov, Alexander Dolnik.

Our team
- Natalia Vassilieva
- Alexander Dolnik
- Ilya Markov
- Maria Teplyh
- Maria Davydova
- Dmitry Shubakov
- Alexander Yaremchuk

General problems
- Semantic gap between the system's and the human's mode of image analysis
- Specifics of human visual perception
- How to capture the semantics of an image
- Signature calculation and response time
- Combining different features and metrics

Image retrieval system
General goal: an image retrieval system that
- is able to process natural-language queries
- is able to search among annotated and non-annotated images
- takes human visual perception into account
- processes various features (color, texture, shapes)
- uses relevance feedback for query refinement and adaptive search
How do we minimize the "semantic gap" between low-level features and image semantics?

CBIR: traditional approach
- Indexation: compute a signature for each database image and store it.
- Retrieval: compute the signature of the query image, compare it against the stored signatures, return the result (a minimal sketch of this pipeline is given below).
Enhancements studied in this project:
- Relevance feedback: query refinement
- Fusion of results: independent search by different features
- Color space partition according to human perception
- Auto-annotation, annotation refinement
- Multidimensional indexing (vp-tree)
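A minimal sketch of this signature-based pipeline, assuming a placeholder grayscale-histogram signature and Euclidean comparison (neither is the system's actual feature or metric):

```python
import numpy as np

def extract_signature(image: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: a coarse grayscale intensity histogram."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    hist, _ = np.histogram(gray, bins=16, range=(0, 256), density=True)
    return hist

def index_collection(images: dict) -> dict:
    """Indexation step: compute and store a signature for every database image."""
    return {img_id: extract_signature(img) for img_id, img in images.items()}

def retrieve(query_image: np.ndarray, index: dict, top_k: int = 10) -> list:
    """Retrieval step: compare the query signature against the stored signatures."""
    q = extract_signature(query_image)
    scored = [(img_id, float(np.linalg.norm(q - sig))) for img_id, sig in index.items()]
    return sorted(scored, key=lambda pair: pair[1])[:top_k]
```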

Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback

Human visual perception: colors
Experiments with color partitions of HSV space:
- (H=9; S=2; V=3): 72%
- (H=11; S=2; V=3): 66%
- (H=13; S=2; V=3): 63%
- (H=15; S=2; V=3): 60%
Next: compare partitions of different color spaces (RGB, HSV, Lab).
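As an illustration of the (H=9; S=2; V=3) partition above, a uniform quantization of HSV space can be computed as follows (a sketch using OpenCV's 8-bit HSV ranges; the perceptual partition under study may use non-uniform cell boundaries):

```python
import cv2
import numpy as np

def hsv_partition_histogram(image_bgr: np.ndarray,
                            h_bins: int = 9, s_bins: int = 2, v_bins: int = 3) -> np.ndarray:
    """Quantize pixels into h_bins * s_bins * v_bins HSV cells (9 * 2 * 3 = 54 here)
    and return the normalized cell-occupancy histogram."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV stores 8-bit HSV with H in [0, 180) and S, V in [0, 256)
    hist = cv2.calcHist([hsv], [0, 1, 2], None,
                        [h_bins, s_bins, v_bins],
                        [0, 180, 0, 256, 0, 256])
    return (hist / hist.sum()).flatten()
```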

Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback

Auto-annotation
Natalia Vassilieva, Boris Novikov. Establishing a correspondence between low-level features and semantics of fixed images. In Proceedings of the Seventh National Russian Research Conference RCDL'2005, Yaroslavl, October 2005.
- Training set selection
- Color feature extraction for every image in the set
- Similarity calculation for every pair of images in the set
- Training set clustering
- Basis color feature calculation: one per cluster
- Definition of basis lexical features
- Correspondence between basis color features and basis lexical features
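A condensed sketch of the steps above, assuming k-means for the clustering step, cluster centroids as the basis color features, and the most frequent cluster keywords as the basis lexical features (these concrete choices are illustrative, not necessarily those of the paper):

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def build_annotation_basis(features: np.ndarray, annotations: list, n_clusters: int = 10):
    """features: (n_images, feature_dim) color features of the training set;
    annotations: list of keyword lists, one per training image.
    Returns per-cluster (basis color feature, basis lexical feature) pairs."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    basis = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        color_basis = features[members].mean(axis=0)          # one color feature per cluster
        words = Counter(w for i in members for w in annotations[i])
        lexical_basis = [w for w, _ in words.most_common(5)]  # most frequent cluster keywords
        basis.append((color_basis, lexical_basis))
    return basis
```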

Examples (annotations produced for two images):
- city, night, road, river
- snow, winter, sky, mountain

Retrieve by textual query
N. Vassilieva and B. Novikov. A Similarity Retrieval Algorithm for Natural Images. Proc. of the Baltic DB&IS'2004, Riga, Latvia, Scientific Papers University of Latvia, June 2004.
- The image database is divided into clusters
- Search for the appropriate cluster by textual query using the clusters' annotations
- Browse the images from the appropriate cluster
- Use relevance feedback to refine the query
- Use relevance feedback to reorganize the clusters and assign new annotations
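For the cluster-lookup step above, a hypothetical helper that scores clusters by keyword overlap with the textual query (the published algorithm's actual matching and ranking may differ):

```python
def best_cluster(query: str, cluster_annotations: dict):
    """cluster_annotations maps cluster_id -> iterable of annotation keywords.
    Returns the cluster whose annotation shares the most words with the query."""
    query_words = set(query.lower().split())
    return max(cluster_annotations,
               key=lambda cid: len(query_words & {w.lower() for w in cluster_annotations[cid]}))
```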

Feature extraction: color
- Color histograms
- Statistical approach: first moments of the color distribution (per channel) and covariations
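A sketch of the statistical color feature, assuming "first moments and covariations" means the per-channel means together with the channel covariance matrix:

```python
import numpy as np

def color_moments(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) array. Returns the per-channel means (3 values)
    plus the upper triangle of the 3x3 channel covariance matrix (6 values)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    means = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)    # 3x3 covariance across color channels
    upper = cov[np.triu_indices(3)]       # variances and pairwise covariations
    return np.concatenate([means, upper])
```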

Feature extraction: texture
- Texture: use the independent component filters that result from ICA
H. Borgne, A. Guerin-Dugue, A. Antoniadis, "Representation of images for classification with independent features".
Each image is passed through the N ICA filters; H_ki is the histogram of image I_k's response to filter i. The distance between two images is
dist(I1, I2) = Σ_{i=1..N} KL_H(H_1i, H_2i)
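The filter-histogram distance above, written out as code (a sketch; KL_H is taken here as a symmetrized Kullback-Leibler divergence, since the slide does not specify the exact variant):

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """One-sided Kullback-Leibler divergence between two normalized histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def texture_distance(hists_1: list, hists_2: list) -> float:
    """dist(I1, I2): sum over the N filter-response histograms of a (here symmetrized)
    KL divergence; hists_k[i] is the histogram of image k's response to ICA filter i."""
    return sum(0.5 * (kl_divergence(h1, h2) + kl_divergence(h2, h1))
               for h1, h2 in zip(hists_1, hists_2))
```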

Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback

Fusion of retrieval result sets
- How to merge fairly?
- How to merge efficiently?
- How to merge effectively?
Fusion of m weighted lists with ranked elements: list j has weight ω_j and consists of pairs (x_j1, r_j1), (x_j2, r_j2), ..., where x_ji is an object and r_ji is its rank.

Ranked lists fusion: application area
- Supplement fusion: union of textual results (textual viewpoints)
- Collage fusion: combining texture results (texture viewpoint) with color results (color viewpoint), or the results of different color methods (different color viewpoints)

Ranked lists fusion: application area
- Search by textual query in a partly annotated image database
(Diagram: the textual query is answered both by annotations, yielding a ranked list (TextResult 1, textrank 1), (TR 2, tr 2), ..., and by content-based search; the two ranked lists are fused into the final result.)

Three main native fusion properties
- Commutative property
- Associative property
- The value of a result object's rank is independent of other objects' ranks
Examples: the COMBSUM, COMBMIN, COMBMAX merge functions.
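Minimal weighted versions of these merge functions (a sketch; each input list is assumed to already carry normalized scores, and the per-list ω-weighting follows the fusion setup above):

```python
from collections import defaultdict

def fuse(lists, weights, combine="sum"):
    """lists: one dict per viewpoint mapping object_id -> normalized score;
    weights: one weight per list. Returns the fused score per object."""
    pooled = defaultdict(list)
    for ranked, w in zip(lists, weights):
        for obj_id, score in ranked.items():
            pooled[obj_id].append(w * score)
    reduce_fn = {"sum": sum, "min": min, "max": max}[combine]  # COMBSUM / COMBMIN / COMBMAX
    return {obj_id: reduce_fn(scores) for obj_id, scores in pooled.items()}
```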

Additional native fusion properties
- Normalization and delimitation property
- Conic property: the attraction of the current object to the mixed result depends on the value of a function g(rank, weight) ≥ 0
- Snare condition

Conic properties of the function g
- g monotonically decreases with the weight parameter fixed
- g monotonically decreases with the rank parameter fixed
- g must satisfy the boundary conditions:
  - g(0, w) > 0 if w ≠ 0
  - g(r, 0) = 0

Ranked lists fusion: formulas
(The fusion formula is shown on the slide as an image and is not reproduced in the transcript.)

Ranked lists fusion: algorithm
- All lists are sorted by object id
- The lists are merged step by step, in object-id order
- If object_id1 ≠ object_id2, then that object is absent from one of the lists
(Diagram: List 1 and List 2 are scanned with current object_id1 and object_id2 pointers and merged into the result list.)
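A sketch of this id-ordered merge for two lists (an object missing from one list is passed to the combining function as None, mirroring the "object is absent" case above):

```python
def merge_by_id(list1, list2, combine):
    """list1, list2: lists of (object_id, rank) pairs, each sorted by object_id.
    combine(r1, r2) fuses two ranks; a rank missing from one list is passed as None."""
    result = []
    i = j = 0
    while i < len(list1) and j < len(list2):
        id1, r1 = list1[i]
        id2, r2 = list2[j]
        if id1 == id2:            # object present in both lists
            result.append((id1, combine(r1, r2)))
            i += 1
            j += 1
        elif id1 < id2:           # object absent from the second list
            result.append((id1, combine(r1, None)))
            i += 1
        else:                     # object absent from the first list
            result.append((id2, combine(None, r2)))
            j += 1
    result.extend((oid, combine(r, None)) for oid, r in list1[i:])
    result.extend((oid, combine(None, r)) for oid, r in list2[j:])
    return result
```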

Ranked lists fusion: experiments
Necessary conditions:
- Each viewpoint should provide some "valuable" information: the retrieval system's performance should at least be better than that of a random system.
- The information is not fully duplicated: there should be partial disagreement among the viewpoints.

Ranked lists fusion: experiments
Parameters:
- R_overlap and N_overlap conditions
- Intercomparison of methods:
  - Classical methods: COMBSUM, COMBMIN, COMBMAX
  - Probabilistic method: probFuse
  - Random method: random values that satisfy the merge properties

Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback

Adaptive merge: color and texture
Dist(I, Q) = α*C(I, Q) + (1 - α)*T(I, Q), 0 ≤ α ≤ 1,
where C(I, Q) is the color distance and T(I, Q) is the texture distance between image I and query Q.
Hypothesis: the optimal α depends on the features of the query Q, and it is possible to identify common features among images that share the same "best" α.
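The combined distance as code; C and T stand for the color and texture distances defined earlier, and predicting a good α from the query is exactly the open question, so here α is simply a parameter:

```python
def combined_distance(color_dist: float, texture_dist: float, alpha: float) -> float:
    """Dist(I, Q) = alpha * C(I, Q) + (1 - alpha) * T(I, Q), with 0 <= alpha <= 1."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * color_dist + (1.0 - alpha) * texture_dist
```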

Adaptive merge: experiments

Estimation tool
- Web application
- Provides interfaces for developers of search methods
- Uses common measures to estimate search methods:
  - Precision
  - Pseudo-recall
- Collects user opinions to build a test database
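Of the two measures, precision over the top-k results is standard and sketched below; pseudo-recall is collection-specific and its exact definition is not given on the slide, so it is omitted. The relevance judgements are assumed to come from the collected user opinions:

```python
def precision_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of the top-k retrieved images that users judged relevant."""
    top_k = list(retrieved_ids)[:k]
    if not top_k:
        return 0.0
    return sum(1 for img_id in top_k if img_id in relevant_ids) / len(top_k)
```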

Datasets
- Own photo collection (~2000 images)
- Subset of the own photo collection (150 images)
- Flickr collection (~15000, ~1.5 mln images)
- Corel photoset (1100 images)

Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback