SEMANTIC FEATURE ANALYSIS IN RASTER MAPS Trevor Linton, University of Utah.


Acknowledgements  Thomas Henderson  Ross Whitaker  Tolga Tasdizen  The support of IAVO Research, Inc. through contract FA C-005.

Field of Study  Geographical Information Systems  Part of document recognition and registration.  What are USGS maps?  A set of roughly 55,000 scanned 1:24,000-scale images of the U.S. containing a wealth of data.  Why study them?  To extract new information (features) from USGS maps and register it with existing GIS data and satellite/aerial imagery.

Problems  Degradation and scanning produces noise.  Overlapping features cause gaps.  Metadata has the same texture as features.  Closely grouped features makes discerning between features difficult.

Problems – Noisy Data  A scanning artifact that introduces noise.

Problems – Overlapping Features  Metadata and features overlap and share similar textures, leaving gaps in the data.

Problems – Closely Grouped Features Closely grouped features make discerning features difficult.

Thesis & Goals  Use Gestalt principles to extract features and overcome some of the problems described.  Quantitatively extract intersections at 95% recall and 95% precision.  Quantitatively extract intersections at 99% recall and 90% precision.  The current best method produces 75% recall and 84% precision for intersections.

Approach  Gestalt Principles  Organizes perception, useful for extracting features.  Law of Similarity  Law of Proximity  Law of Continuity

Approach – Gestalt Principles  Law of Similarity  Grouping of similar elements into whole features.  Reinforced with histogram models.

Approach – Gestalt Principles  Law of Proximity  Spatial proximity of elements groups them together.  Reinforced through Tensor Voting System

Approach – Gestalt Principles  Law of Continuity  Features with small gaps should be viewed as continuous.  Idea of multiple layers of features that overlap.  Reinforced by Tensor Voting System.

Approach – Framework Overview

Pre-Processing  Class Conditional Density Classifier  Uses statistical means and histogram models.  μ = Histogram model vector.  Find class k with the smallest δ is the class of x.

Pre-Processing  k-Nearest Neighbors  Uses the class that is found most often out of k closest neighbors in the histogram model.  Closeness is defined by Euclidian distance of the histogram models.

Pre-Processing  Knowledge Based Classifier  Uses logic that is based on our knowledge of the problem to determine classes.  Based on information on the textures each class has.

Pre-Processing  Original Image with Features Estimated

Pre-Processing  Original Image with Roads Extracted Class condition classifier k-Nearest Neighbors Knowledge Based

Tensor Voting System  Overview

Tensor Voting System  Uses an idea of “Voting”  Each point in the image is a tensor.  Each point votes how other points should be oriented.  Uses tensors as mathematical representations of points.  Tensors describe the direction of the curve.  Tensors represent confidence that the point is a curve or junction.  Tensors describe a saliency of whether the feature (whether curve or junction) actually exists.

Tensor Voting System  What is a tensor?  Two vectors that are orthogonal to one another packed into a 2x2 matrix.

Tensor Voting System  Creating estimates of tensors from input tokens.  Principal Component Analysis  Canny edge detection  Ball Voting

Tensor Voting System  Voting  For each tensor in the sparse field  Create a voting field based on the sigma parameter.  Align the voting field to the direction of the tensor.  Add the voting field to the sparse field.  Produces a dense voting field.

Tensor Voting System  Voting Fields  A window size is calculated from  Direction of each tensor in the field is calculated from  Attenuation derived from

Tensor Voting System  Voting Fields (Attenuation)  Red and yellow are higher votes, blue and turquoise lower.  Shape related to continuation vs. proximity.

Tensor Voting System  Extracting features from dense voting field.  determines the likelihood of being on a curve.  determines the likelihood of being a junction.  If both λ 1 and λ 2 are small then the curve or junction has a small amount of confidence in existing or being relevant.

Tensor Voting System  Extracting features from dense voting field.  Original Image Curve Map Junction Map

Post-processing  Extracting features from curve map and junction map.  Global Threshold and Thinning  Local Threshold and Thinning  Local Normal Maximum  Knowledge Based Approach

Post-processing  Global threshold on curve map. Applied Threshold Thinned Image

Post-processing  Local threshold on curve map. Applied Threshold Thinned Image

Post-processing  Local Normal Maximum  Looks for maximum over the normal of the tensor at each point. Applied Threshold Thinned Image

Post-processing  Knowledge Based Approach  Uses knowledge of types of artifacts of the local threshold to clean and prep the image. Original Image Knowledge Based Approach

Experiments  Determine adequate parameters.  Identify weaknesses and strengths of each method.  Determine best performing methods.  Quantify the contributions of tensor voting.  Characterize distortion of methods on perfect inputs.  Determine the impact of misclassification of text on roads.

Experiments  Quantitative analysis done with recall and precision measurements.  Relevant is the set of all features that are in the ground truth.  Retrieved is the set of is all features found by the system.  tp = True Positive, fn = False Negative, fp = False Positive  Recall measures the systems capability to find features.  Precision characterizes whether it was able to find only those features.  For both recall and precision, 100% is best, 0% is worst.

Experiments  Data Selection  Data set must be large enough to adequately represent features (above or equal to 100 samples).  One sub-image of the data must not be biased by the selector.  One sub-image may not overlap another.  A sub-image may not be a portion of the map which contains borders, margins or the legend.

Experiments  Ground Truth  Manually generated from samples.  Roads and intersections manually identified.  Ground Truth is generated twice, those with more than 5% of a difference are re-examined for accuracy. Ground truth Original Image

Experiments  Best Pre-Processing Method  All pre-processing methods examined without tensor voting or post processing for effectiveness.  Best window size parameter for k-Nearest Neighbors was qualitatively found to be 3x3.  The best k parameter for k-Nearest Neighbors was quantitatively found to be 10.  The best pre-processing method found was the Knowledge Based Classifier

Experiments  Tensor Voting System  Results from test show the best value for σ is between 10 and 16 with little difference in performance.

Experiments  Tensor Voting System  Contributions from tensor voting were mixed.  Thresholding methods performed worse.  Knowledge based method improved 10% road recall, road precision dropped by 2%, intersection recall increased by 22% and intersection precision increased by 20%.

Experiments  Best Post-Processing  Finding the best window size for local thresholding.  Best parameter was found between 10 and 14.

Experiments  Best Post-Processing  The best post-processing method was found by using a naïve pre-processing technique and tensor voting.  Knowledge Based Approach performed the best.

Experiments  Running the system on perfect data (ground truth as inputs) produced higher results then any other method (as expected).  Thesholding had a considerably low intersection precision due to artifacts produced in the process.

Experiments  Best combination found was k-Nearest Neighbors with a Knowledge Based Approach.  Note the best pre-processing method Knowledge Based Classifier was not the best pre-processing method when used in combinations due to the type of noise it produces.  With Text:  92% Road Recall, 95% Road Precision  82% Intersection Recall, 80% Intersection Precision  Without Text:  94% Road Recall, 95% Road Precision  83% Intersection Recall, 80% Intersection Precision

Experiments  Confidence Intervals (95% CI, 100 samples)  Road Recall:  Mean: 93.61% CI [ 92.47%, 94.75% ] ± 0.14%  Road Precision:  Mean: 95.23% CI [ 94.13%, 96.33% ] ± 0.10%  Intersection Recall:  Mean: 82.22% CI [ 78.91%, 85.51% ] ± 3.29%  Intersection Precision:  Mean: 80.1% CI [ 76.31%, 82.99% ] ± 2.89%

Experiments  Adjusting parameters dynamically  Dynamically adjusting the σ between 4 and 10 by looking at the amount of features in a window did not produce much difference in the recall and precision (less than 1%).  Dynamically adjusting the c parameter in tensor voting actually produced worse results because of exaggerations in the curve map due to slight variations in the tangents for each tensor.

Future Work & Issues  Tensor Voting and thinning tend to bring together intersections too soon when the road intersection angle was too low or the roads were too thick.  The Hough transform may possibly overcome this issue.

Future Work & Issues  Scanning noise will need to be removed in order to produce high intersection recall and precision results.

Future Work & Issues  Closely grouped and overlapping features.

Future Work & Issues  Developing other pre-processing and post-processing techniques.  Learning algorithms  Various local threshold algorithms  Road following algorithms