A New Method for Crater Detection Heather Dunlop November 2, 2006.
Introduction ● Purpose: – Detect as many craters as possible – With as high an accuracy as possible

System Overview ● Compute probability of a boundary image ● Use Hough Transform to detect circles as candidate craters ● Compute a set of features on each candidate ● Apply SVM classifier to identify craters vs. non- craters

Boundary Image ● [Side-by-side comparison: Canny edges, Sobel edges, and the probability-of-boundary image]

Probability of a Boundary ● Natural image boundary detection – Martin, Fowlkes, Malik, UC Berkeley ● Brightness, texture gradients ● Half-disc regions described by histograms ● Compare distributions with χ² statistic ● Combine cues to form probability-of-boundary image P_b(x, y)
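The half-disc comparison above boils down to a χ² distance between two histograms. A minimal sketch (illustrative, not the authors' code; the small `eps` guards against empty bins):

```python
import numpy as np

def chi_squared_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms.
    Returns 0 for identical histograms, 1 for disjoint normalized ones."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Computed between the brightness (or texton) histograms of the two half-discs at each pixel and orientation, a large distance signals a likely boundary.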

Hough Transform ● For lines: – “There are an infinite number of potential lines that pass through any point, each at a different orientation. The purpose of the transform is to determine which of these theoretical lines pass through most features in an image.” -- wikipedia.org ● For circles: – Parameterize by circle center (x, y) and radius r – Each edge point votes for possible circles by incrementing bins in an accumulator matrix – Circles with the most votes win
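The circle-voting scheme can be sketched as follows (a simplified illustration, not the talk's implementation; a fixed set of 100 angles discretizes each voting circle):

```python
import numpy as np

def hough_circles(edge_points, shape, radii):
    """Vote for circle centers over a set of candidate radii.
    edge_points: iterable of (x, y) edge pixels; shape: (width, height).
    Returns an accumulator of shape (width, height, len(radii))."""
    acc = np.zeros((shape[0], shape[1], len(radii)), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for (x, y) in edge_points:
        for ri, r in enumerate(radii):
            # Each edge point votes for every center at distance r from it.
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
            np.add.at(acc, (cx[ok], cy[ok], np.full(ok.sum(), ri)), 1)
    return acc
```

Local maxima in the accumulator give the candidate circle centers and radii.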

Detect Circles ● Threshold boundary image and apply Hough Transform

Region Features ● Features that can distinguish crater from non-crater regions ● Shading ● Intensity ● Texture ● Template ● Boundary ● Radius ● Lighting: azimuth angle, angle of incidence

Shading Features ● Mostly applicable to day images ● Linear gradient due to directional lighting ● Compute best fit linear gradient ● Features: – direction of gradient – strength of gradient – SSE to gradient
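The best-fit linear gradient is an ordinary least-squares problem, I(x, y) ≈ a·x + b·y + c. A sketch of how the three shading features could be computed (illustrative; the talk does not give the fitting code):

```python
import numpy as np

def fit_linear_gradient(patch):
    """Least-squares fit of I(x, y) ~ a*x + b*y + c over an image patch.
    Returns (direction, strength, SSE) of the fitted gradient (a, b)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    a, b, c = coef
    residual = patch.ravel() - A @ coef
    return np.arctan2(b, a), np.hypot(a, b), float(residual @ residual)
```

A strong, consistent gradient (low SSE) matches the shading expected from directional lighting on a crater bowl.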

Crater Regions ● [Figure: inside, rim, outside, and whole regions, parameterized by radius r and rim width δ] ● Compare regions with Euclidean distance or χ² statistic

Intensity Features ● Mean intensity ● Histogram of intensities

Texture ● MR8 Filter bank: Varma, Zisserman – Edges – Bars – Spots – Multiple orientations and scales ● Convolve images with set of filters ● Aggregate responses ● Cluster with k-means to form textons

Texton Maps ● Compute nearest texton for each image pixel's response vector ● Form texton map for image
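The nearest-texton assignment can be sketched as follows (illustrative; the shapes of `responses` and `textons` are assumptions, not from the talk):

```python
import numpy as np

def texton_map(responses, textons):
    """Assign each pixel's filter-response vector to its nearest texton.
    responses: (H, W, D) filter responses; textons: (K, D) k-means centers.
    Returns an (H, W) map of texton indices."""
    H, W, D = responses.shape
    flat = responses.reshape(-1, D)
    # Squared Euclidean distance from every pixel to every texton center.
    d2 = ((flat[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(H, W)
```

The texture feature of the next slide is then just a histogram (e.g. `np.bincount`) of these indices within each crater region.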

Texture Features ● Histogram of textons in region

Template Features ● Mostly applicable to night images ● [Ring-shaped crater template image] ● Sum the element-wise multiplication with the image and normalize by template size
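A minimal sketch of that score (assuming "normalize by size" means dividing by the number of template pixels; the exact template values are not given in the talk):

```python
import numpy as np

def template_score(patch, template):
    """Sum of the element-wise product of patch and template,
    normalized by the number of template pixels."""
    patch = np.asarray(patch, dtype=float)
    return float((patch * template).sum() / template.size)
```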

Boundary Features ● Sum probability of a boundary in rim normalized by area of rim
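Given the probability-of-boundary image and a binary mask for the rim region, this feature is a one-liner (sketch; the rim mask construction is assumed):

```python
import numpy as np

def boundary_feature(pb, rim_mask):
    """Mean probability-of-boundary over the rim region:
    sum of P_b inside the rim, normalized by the rim's area."""
    return float(pb[rim_mask].sum() / rim_mask.sum())
```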

Support Vector Machines ● Linear SVM: linear separator that maximizes the margin ● For non-linearly separable data, map into a higher-dimensional feature space with a kernel (see /slides/svm_with_annotations.pdf)
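To make the margin-maximization idea concrete, here is a tiny hinge-loss SVM trained by sub-gradient descent (a generic sketch, not the classifier used in the talk; the hyperparameters are arbitrary):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500, seed=0):
    """Minimize hinge loss + lam*||w||^2 by sub-gradient descent.
    X: (n, d) features; y: labels in {-1, +1}. Returns (w, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:
                # Hinge loss active: push this point outside the margin.
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                # Only the regularizer contributes: shrink the weights.
                w = (1 - lr * lam) * w
    return w, b
```

In practice a library SVM with a kernel handles the non-linearly separable case described above.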

Crater vs. Non-Crater Classifier ● Train an SVM classifier using features extracted ● Training data: – ground truth craters – Hough detected circles that are not craters ● On test image, apply classifier to candidate craters to determine probability that each is a crater

Experiments ● 8 day images, 8 night images ● 820 craters, approx. 50 per image ● Each crater 4 pixels or larger in radius marked as ground truth ● Looking for craters of minimum radius 5 pixels ● Leave-one-image-out cross validation

Results: Day ● [Example detections on day images] ● Legend: false positive · detected true positive · ground truth for true positive · not detected

Results: Night ● [Example detections on night images] ● Legend: false positive · detected true positive · ground truth for true positive · not detected

False Detections ● [Examples of false detections] ● Legend: false positive · detected true positive · ground truth for true positive · not detected

Performance Metrics ● Precision: fraction of detections that are true positives rather than false positives ● Recall: fraction of true positives that are detected rather than missed
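The two metrics in terms of true positives (TP), false positives (FP), and missed craters (FN):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN).
    Returns 0.0 for an undefined (empty-denominator) metric."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```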

Results ● [Precision/recall results for day and night images]

Conclusions ● Works better on day images than night ● The more training data the better ● Questions, comments, suggestions?