1 Representing Meanings of Images Based on Associative Values with Lexicons
Ying Dai, Faculty of Software and Information Science, Iwate Prefectural University

2 Outline
- Background
- Lexicon-based image/key-frame representation
- Semantic tolerance relation model
- Associative values with pre-defined classes: semantic relevance (SR) and visual similarity (VS)
- Generating associative values: bidirectional associative memories
- Image/key-frame retrieval
- Experiments and analysis

3 Background
It is hard to annotate the meanings of images because of:
- different interpretations by different individuals
- diversity, ambiguity, and dependence on context
The issues in automatic image annotation:
- defining domains together with related lexicons
- specifying a training sample set that represents the meanings of the lexicons
- generating the association of images with the defined lexicons
This paper focuses on:
- handling the above issues
- analyzing the experimental results

4 Semantic tolerance relation model
Images are described by different domains, such as impression, nature vs. man-made, human vs. non-human, etc. For each domain, a set of lexicons is defined; these lexicons are intra-tolerated, and lexicons in different domains can be inter-tolerated. Because the concepts of images and videos in many domains are imprecise, and the interpretation of what counts as a similar image/video is ambiguous and subjective at the level of human perception, we define semantic categories of images and key-frames together with the tolerance degrees between them. For a given domain, concepts are divided into classes. A class may be associated with another class in the same domain; this relation is called intra-association. A class may also be associated with a class in a different domain; this relation is called inter-association.

5 Semantic tolerance relation model
The rate of one lexicon being overlapped by another is defined as the tolerance degree of c1 to c2, denoted td(c1, c2):
- If c1 and c2 are synonyms, td(c1, c2) = 1
- If c1 and c2 are antonyms, td(c1, c2) = 0
- If c1 is part of c2, td(c1, c2) = 1 and td(c2, c1) = 0.5
- If c1 is a type of c2, td(c1, c2) = 1 and td(c2, c1) = 0.5
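
A minimal sketch of how these tolerance-degree rules could be encoded; the relation labels and the helper name are illustrative assumptions, not the paper's implementation.

```python
def tolerance_degree(relation: str) -> tuple[float, float]:
    """Return (td(c1, c2), td(c2, c1)) for a lexical relation between c1 and c2."""
    if relation == "synonymy":               # c1 and c2 are synonyms
        return 1.0, 1.0                      # slide gives td(c1, c2) = 1; symmetry assumed
    if relation == "antonymy":               # c1 and c2 are antonyms
        return 0.0, 0.0                      # slide gives td(c1, c2) = 0; symmetry assumed
    if relation in ("part_of", "type_of"):   # c1 is part of / a type of c2
        return 1.0, 0.5
    raise ValueError(f"unknown relation: {relation}")

# Example: "tree" is a type of "plant" -> td(tree, plant) = 1.0, td(plant, tree) = 0.5
print(tolerance_degree("type_of"))
```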

6 Representing meanings of images
The meaning of an image is represented by a vector of its associative values with the defined lexicons. Each associative value is affected by two factors:
- semantic relevance (SR)
- visual similarity (VS)
The associative value is the weighted sum of SR and VS.
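
The slide states this definition only in words; a minimal sketch of the weighted sum follows, where the weight `w` and the function name are assumptions.

```python
def associative_value(sr: float, vs: float, w: float = 0.5) -> float:
    """Associative value of an image with one lexicon as a weighted sum of
    semantic relevance (SR) and visual similarity (VS).
    The weight w is a free parameter (slide 17 reports varying it)."""
    return w * sr + (1.0 - w) * vs
```

An image is then represented by the vector of such values over all defined lexicons.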

7 Associative value-based image retrieval
For image categorization within a single domain, images are grouped into class i when their associative values with class i are larger than a clustering threshold. For image categorization across domains, the grouped images are the intersection of the images that belong to class i in domain k and the images that belong to class j in domain l.
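
A sketch of the two grouping rules, assuming each image carries a mapping from (domain, class) to its associative value; the data layout and the threshold value are illustrative, not from the paper.

```python
def group_single_domain(images, domain, cls, threshold=0.6):
    """Images whose associative value with class `cls` in `domain` exceeds the clustering threshold."""
    return [im for im in images if im["values"].get((domain, cls), 0.0) > threshold]

def group_cross_domain(images, dom_k, cls_i, dom_l, cls_j, threshold=0.6):
    """Cross-domain query: intersection of the images grouped to class i in
    domain k and those grouped to class j in domain l."""
    in_k = group_single_domain(images, dom_k, cls_i, threshold)
    in_l = group_single_domain(images, dom_l, cls_j, threshold)
    return [im for im in in_k if im in in_l]

# Example query "interior with human" (slide 15), assuming those lexicon/domain names:
# group_cross_domain(images, "nature vs. man-made", "interior", "human vs. non-human", "human")
```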

8 Calculating associative values
Bidirectional associative memories (BAM), by B. Kosko:
- associate two patterns (X, Y) such that when one is encountered, the other can be recalled
- store the associated pattern pairs in a connection weight matrix
[Figure: two-layer BAM network, X units X1 … Xm connected to the Y layer through weights W11 … Wmn]
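
A minimal sketch of Kosko-style BAM storage, pairing each learned image pattern (X layer) with the one-hot class code of its lexicon (Y layer, slide 11) and summing the outer products into the weight matrix; the bipolar coding is a standard BAM convention, not a detail given on the slides.

```python
import numpy as np

def to_bipolar(pattern):
    """Map a binary {0, 1} pattern to bipolar {-1, +1}, the usual BAM coding."""
    return 2 * np.asarray(pattern, dtype=float) - 1

def bam_weights(x_patterns, y_patterns):
    """Store associated pattern pairs (X_p, Y_p) in one connection weight matrix:
    W = sum_p X_p^T Y_p (outer-product / Hebbian storage)."""
    W = np.zeros((len(x_patterns[0]), len(y_patterns[0])))
    for x, y in zip(x_patterns, y_patterns):
        W += np.outer(to_bipolar(x), to_bipolar(y))
    return W

# Example: 3 learned image patterns (tiny 4-pixel stand-ins) paired with
# one-hot codes for 3 classes, as on slides 10-11.
x_patterns = [[1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
y_patterns = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
W = bam_weights(x_patterns, y_patterns)   # shape (4, 3)
```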

9 Calculating associative values
Bidirectional associative memories:
- units' inputs and outputs in the X layer and the Y layer
- the energy of the BAM decreases or remains the same after each unit update
- the BAM eventually converges to a local minimum that corresponds to a stored associated pattern pair
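
A sketch of the bidirectional update the slide describes: the X and Y layers are updated through W and its transpose until the pair stops changing, i.e. the energy has settled into a local minimum corresponding to a stored pair. The sign thresholding is the standard bipolar choice and an assumption here.

```python
import numpy as np

def bam_recall(W, x0, max_iters=100):
    """Bidirectional recall: drive Y from X through W, drive X back through W^T,
    and repeat; the BAM energy decreases or stays the same at each update, so the
    pair settles into a stored association."""
    x = np.where(np.asarray(x0, dtype=float) >= 0, 1.0, -1.0)   # bipolar start state
    y = np.where(x @ W >= 0, 1.0, -1.0)
    for _ in range(max_iters):
        x_new = np.where(y @ W.T >= 0, 1.0, -1.0)
        y_new = np.where(x_new @ W >= 0, 1.0, -1.0)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                                               # converged: stable pair
        x, y = x_new, y_new
    return x, y
```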

10 Calculating associative values
Defining lexicons for the domain:
- for the nature vs. man-made domain, 30 lexicons are defined
Storing image patterns in the X layer:
- more than 30 images are specified to represent the meanings of the 30 lexicons (for some lexicons, more than one image is specified)
- these images are stored as the learned patterns of the 30 lexicons
[Figure: some learned image patterns — tree (P1), tree (P2), building, restaurant, landscape]

11 Calculating associative values
Stored patterns of the Y layer:
- if I classes are pre-defined, the stored patterns of the Y layer are the encoding units of the classes, each with I components
- class i is encoded by the unit Yi = [0 … 0 1 0 … 0], with the 1 in the i-th position
- the units' output of the Y layer for learned pattern i gives the recall values of learned pattern i to the defined lexicons; the i-th component is the recall value of learned pattern i to class i

12 Calculating associative values
Units' inputs and outputs in the Y layer for an input image n:
- input: the weighted sum of the pixel values of the input image
- output: the recall values of the input image
The VS of image n to class i regarding dimension k is the recall value of input image n to class i.
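
A hedged sketch of reading VS off the Y layer: the input image is presented at the X layer, each Y unit receives the weighted sum of the (bipolar) pixel values, and the Y-layer recall values are taken as the visual similarities to the classes. The squashing to [0, 1] is an illustrative choice; the slide only says VS is the recall value of the input image to class i.

```python
import numpy as np

def visual_similarity(W, image_pattern):
    """Recall values of an input image at the Y layer, one per pre-defined class."""
    x = 2 * np.asarray(image_pattern, dtype=float) - 1   # bipolar coding, as in storage
    y_input = x @ W                                      # weighted sum of pixel values
    return (np.tanh(y_input / len(x)) + 1) / 2           # recall values squashed to [0, 1]
```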

13 Calculating associative values
SR is calculated by the following rules:
- determine which lexicon image n mostly belongs to
- generate the value of SR
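
The SR rules themselves are not reproduced in the transcript; what follows is a plausible sketch under the assumption that the lexicon image n mostly belongs to is the one with the highest VS, and that SR to each class is then read from the tolerance degrees of the semantic tolerance relation model (slide 5). The direction of td used here is also an assumption.

```python
import numpy as np

def semantic_relevance(vs, td):
    """SR of an image to each class.
    vs: visual similarities of the image to the I classes (length-I array).
    td: I x I tolerance-degree matrix, td[c1, c2] = tolerance degree of c1 to c2.
    Assumption: the image mostly belongs to the class with the highest VS, and
    SR to class i is the tolerance degree of class i to that class."""
    best = int(np.argmax(vs))   # lexicon the image mostly belongs to
    return td[:, best]          # SR vector over all I classes
```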

14 Calculating associative values
BAM-based associative values
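
Putting the pieces together under the same assumptions: VS comes from the BAM recall (slide 12), SR from the tolerance relations (slide 13), and the two are combined by the weighted sum of slide 6. This sketch reuses the `visual_similarity` and `semantic_relevance` sketches above; the formula on the original slide is not reproduced in the transcript.

```python
def bam_associative_values(W, td, image_pattern, w=0.5):
    """Vector of associative values of one image with all pre-defined lexicons."""
    vs = visual_similarity(W, image_pattern)   # Y-layer recall values (sketch, slide 12)
    sr = semantic_relevance(vs, td)            # tolerance-degree based SR (sketch, slide 13)
    return w * sr + (1.0 - w) * vs             # weighted sum of SR and VS (slide 6)
```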

15 Examples of image retrieval
Some images of "interior with human" retrieved based on their associative values with these two lexicons.

16 Performance evaluation
Accuracy of retrieving images
Test image set:
- 1000 images randomly selected from a personal album
- 1000 key-frames randomly selected from Video Traxx
Accuracy with varying numbers of learned patterns

17 Performance evaluation
Accuracy with varying weights of SR and VS
[Figures: accuracy curves for the lexicons "furniture" and "building"]

18 Performance evaluation
Accuracy of multi-lexicon queries
[Figure: retrieval accuracy for the query "mountain with building"]

19 Conclusion
- The meanings of images can be represented by their associative values with the defined lexicons.
- Considering SR together with VS when generating associative values improves the accuracy of image retrieval; accuracy is higher when VS is weighted more heavily.
- Associative values can be calculated by a BAM.
- Adding new learned pattern pairs does not affect the accuracy of image retrieval, as long as the capacity of the BAM is not exceeded.

20 Future work
- The influence of the selection of learned pattern images on the accuracy of image retrieval.
- The influence of other types of learned patterns, such as feature vectors of color, shape, and texture, on the accuracy of image retrieval.

