A Quantitative Analysis and Performance Study for Similarity-Search Methods in High-Dimensional Spaces
Presented by Umang Shah Koushik
Introduction
A simple sequential scan outperforms index structures whenever the dimensionality exceeds roughly 10. Every clustering or space-partitioning method fails to handle high-dimensional vector spaces (HDVSs) beyond a certain dimensionality. The VA-File is proposed to carry out the inevitable sequential scan more efficiently, and its relative performance improves as dimensionality grows.
Assumptions and Notation
Assumption 1 (Data and Metric): the data space is the unit hypercube, and distances are measured by an L_p metric (e.g. Euclidean).
Assumption 2 (Uniformity and Independence): data and query points are uniformly distributed, and the dimensions are independent.
NN, NN-distance, NN-sphere
For a query point q, the NN is the database point closest to q; the NN-distance is the distance from q to that point; and the NN-sphere is the sphere centered at q with the NN-distance as its radius.
Probability and Volume Computations
The Difficulties of High Dimensionality
Number of partitions
Data space is sparsely populated
Spherical range queries
Exponentially growing DB size
Expected NN-distance
Number of partitions
One binary split per dimension yields 2^d partitions. Assume N = 10^6 points. For d = 100 there are 2^100 ≈ 10^30 partitions, so almost all partitions are empty.
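This blow-up is easy to verify directly (a quick sketch, not part of the original deck, using the slide's values for N and d):

```python
# Checking the slide's numbers: one binary split per dimension yields
# 2^d partitions, which at d = 100 dwarfs any realistic database size.
d = 100
n_points = 10**6

n_partitions = 2**d
print(f"{n_partitions:.3e}")      # about 1.268e+30 partitions
print(n_partitions // n_points)   # partitions per point: almost all are empty
```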
Data space is sparsely populated
At d = 100, even a hypercube with side length 0.95 covers only 0.95^100 ≈ 0.0059, i.e. 0.59% of the data space.
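The slide's figure can be reproduced with one line per dimensionality (a quick check, not part of the original deck):

```python
# Volume of a sub-hypercube with side 0.95 inside the unit cube, as a
# fraction of the total volume: it collapses as dimensionality grows.
for d in (2, 10, 100):
    print(d, 0.95 ** d)
# at d = 100: 0.95**100 ≈ 0.0059, i.e. only 0.59% of the space
```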
Spherical range queries
Even the largest spherical query that fits inside the data space covers only a vanishing fraction of its volume as d grows.
Exponentially growing DB size
The database size must grow exponentially with d before at least one point is expected to fall into the largest possible sphere.
Expected NN-distance
The expected NN-distance grows steadily with d.
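The growth can be illustrated with a small Monte-Carlo experiment (a sketch under the deck's uniformity assumption; the sample sizes are arbitrary):

```python
import random

def mean_nn_distance(d, n=500, queries=10, seed=42):
    """Monte-Carlo estimate of the expected nearest-neighbor (Euclidean)
    distance among n uniform points in the unit hypercube [0,1]^d."""
    rng = random.Random(seed)
    data = [[rng.random() for _ in range(d)] for _ in range(n)]
    total = 0.0
    for _ in range(queries):
        q = [rng.random() for _ in range(d)]
        # distance from q to its nearest neighbor in the sample
        total += min(
            sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5 for p in data
        )
    return total / queries

# The estimate grows steadily with the dimensionality d.
for d in (2, 10, 50):
    print(d, round(mean_nn_distance(d), 3))
```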
General Cost Model
From the probability that the i-th block is visited, one obtains the expected number of blocks visited; assuming m objects per block, the question is whether M_visit exceeds 20% of the blocks, the point beyond which a sequential scan is cheaper.
Space-Partitioning Methods
Because space consumption would otherwise grow as 2^d, the split is done in d' < d dimensions only. E[nn-dist] increases with d, and once E[nn-dist] exceeds l_max, the entire database is accessed.
Data-Partitioning Methods
Rectangular MBRs: R*-tree, X-tree, SR-tree
Spherical MBRs: TV-tree, M-tree, SR-tree
General partitioning and clustering schemes
Rectangular MBRs
Spherical MBRs
General Partitioning and Clustering Schemes
Assumptions: a cluster is characterized by a geometrical form (an MBR) that covers all cluster points; each cluster contains at least two points; and the MBR of a cluster is convex.
Vector Approximation File (VA-File)
Basic idea: a technique specially designed for similarity search, based on object approximation, i.e. vector data compression.
Notations
Lower bound, upper bound
How it is done
The data space is divided into 2^b rectangular cells arranged as a grid, and the entire approximation file is scanned at query time.
Compression Vector
Each dimension i is assigned a small number of bits b[i], with b being the sum of the b[i]. The data space is divided into 2^b hyper-rectangles, and each data point is approximated by the bit string of the cell it falls into. Only the cell boundaries along each dimension need to be stored.
Compression Vector
The number of bits chosen per dimension normally varies from 4 to 8: typically b_i = l and b = d · l, with l = 4…8.
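A minimal sketch of the approximation step (the function name is illustrative, and a uniform grid per dimension is assumed; the VA-File also permits non-uniform cell boundaries):

```python
def va_approximation(point, bits_per_dim):
    """Quantize a point in [0,1)^d onto a grid with 2^b[i] slices in
    dimension i; the returned cell indices form the point's b-bit
    approximation (b = sum of bits_per_dim)."""
    cells = []
    for x, b in zip(point, bits_per_dim):
        n_slices = 1 << b                        # 2^b[i] cells along this axis
        cells.append(min(int(x * n_slices), n_slices - 1))
    return cells

# With l = 4 bits per dimension (so b = d * l = 12 for d = 3):
print(va_approximation([0.10, 0.50, 0.97], [4, 4, 4]))  # [1, 8, 15]
```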
24
Example:
Two probabilities associated with VA-Files
Filtering Step – Simple Search Algorithm
An array of k elements is maintained in sorted order. The file is scanned sequentially, and whenever an element's lower bound is less than the k-th element's upper bound, the actual distance is computed.
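The scan above can be sketched as follows (names are illustrative; the (lower, upper) bounds are assumed precomputed from the approximations, and exact_dist(i) stands in for fetching the real vector):

```python
def simple_filter_scan(bounds, exact_dist, k):
    """Sketch of the simple search: scan per-object (lower, upper) distance
    bounds; compute an exact distance only when an object's lower bound is
    below the k-th smallest upper bound seen so far."""
    uppers = []                         # k smallest upper bounds, kept sorted
    candidates = []                     # (exact distance, object id)
    for i, (lo, hi) in enumerate(bounds):
        kth_upper = uppers[k - 1] if len(uppers) >= k else float("inf")
        if lo < kth_upper:              # cannot rule the object out: visit it
            candidates.append((exact_dist(i), i))
        uppers.append(hi)
        uppers.sort()
        del uppers[k:]                  # keep only the k smallest upper bounds
    candidates.sort()
    return candidates[:k]

# Toy data: 4 objects with distance bounds and true distances.
dists = [0.20, 0.70, 0.10, 0.90]
bounds = [(0.10, 0.30), (0.50, 0.90), (0.05, 0.20), (0.80, 1.00)]
print(simple_filter_scan(bounds, lambda i: dists[i], k=1))  # [(0.1, 2)]
```

Objects 1 and 3 are pruned without ever fetching their vectors, which is the point of the filtering step.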
Filtering Step – Near-Optimal Search Algorithm
The search is done in two steps. Step 1: while scanning through the file, keep track of the k-th smallest upper bound encountered so far; if a new element's lower bound is greater than this bound, discard the element.
Filtering Step – Near-Optimal Search Algorithm (continued)
Step 2: the elements remaining after Step 1 are visited in increasing order of lower bound, until the next lower bound is greater than or equal to the k-th smallest exact distance found.
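The two steps can be sketched as follows (names are illustrative; bounds are assumed precomputed, and exact_dist(i) stands in for fetching vector i):

```python
import heapq

def near_optimal_search(bounds, exact_dist, k):
    """Sketch of the two-step near-optimal search over per-object
    (lower, upper) distance bounds."""
    # Step 1: one scan over all approximations, pruning by the running
    # k-th smallest upper bound.
    k_uppers = []                     # max-heap (negated) of the k smallest uppers
    survivors = []
    for i, (lo, hi) in enumerate(bounds):
        kth_upper = -k_uppers[0] if len(k_uppers) == k else float("inf")
        if lo <= kth_upper:
            survivors.append((lo, i))
        heapq.heappush(k_uppers, -hi)
        if len(k_uppers) > k:
            heapq.heappop(k_uppers)
    # Step 2: visit survivors in increasing lower-bound order; stop once the
    # next lower bound cannot beat the k-th smallest exact distance found.
    survivors.sort()
    best = []                         # max-heap (negated) of k smallest distances
    for lo, i in survivors:
        if len(best) == k and lo >= -best[0]:
            break                     # no remaining object can enter the top k
        heapq.heappush(best, -exact_dist(i))
        if len(best) > k:
            heapq.heappop(best)
    return sorted(-d for d in best)

dists = [0.20, 0.70, 0.10, 0.90]
bounds = [(0.10, 0.30), (0.50, 0.90), (0.05, 0.20), (0.80, 1.00)]
print(near_optimal_search(bounds, lambda i: dists[i], k=1))  # [0.1]
```

Visiting survivors in lower-bound order lets the scan stop early, which is why this variant approaches the minimum number of vector accesses.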
Performance
(The two performance graphs from the original slides are not reproduced in this transcript.)
Conclusion
All approaches to nearest-neighbor search in HDVSs ultimately degenerate to a linear scan at high dimensionality. The VA-File can outperform every other method known to the authors.