References
[1] DARWIN. Eckerd College. darwin.eckerd.edu
[2] FinScan. Texas A&M University.
[3] M. S. Prewitt and M. L. Mendelsohn, The analysis of cell images, in Ann. New York Acad. Sci., Vol. 1287, New York Acad. Sci., New York.
[4] P. K. Sahoo, et al., A survey of thresholding techniques, Computer Vision, Graphics, and Image Processing.
[5] W. Doyle, Operations useful for similarity-invariant pattern recognition, J. Assoc. Comput. Mach. 9, 1962.
[6] A. Rosenfeld and P. De La Torre, Histogram concavity analysis as an aid in threshold selection, IEEE Trans. Systems Man Cybernet. SMC-13, 1983.

Unsupervised Thresholding and Morphological Processing for Automatic Fin-outline Extraction in DARWIN (Digital Analysis and Recognition of Whale Images on a Network)
S. A. Hale and K. R. Debure, Eckerd College, St. Petersburg, FL

Abstract
At least two software packages exist to facilitate the identification of cetaceans (whales, dolphins, porpoises) based upon the naturally occurring features along the edges of their dorsal fins. Both packages, however, require the user to manually trace the dorsal fin outline to provide an initial position. This process is time consuming and visually fatiguing. This research aims to provide an automated method (employing unsupervised thresholding techniques and image manipulation methods) to extract a cetacean dorsal fin outline from a digital photo so that manual user input is reduced. Ideally, the automated outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans.

Steps for Extraction
The algorithm proceeds in a series of steps grouped into three primary phases: constructing a binary image, refining the binary image, and forming and using the outline yielded by the refined binary image. The algorithm begins when the user imports a photograph of an unknown fin to compare to existing, known fins. In the first step, the color image is converted to a grayscale image (Fig. 1b) (and downsampled for performance reasons) so that each pixel's intensity may be analyzed. A histogram of the grayscale image is constructed and analyzed using the mode method [3] to find the darkest homogeneous region. (Other thresholding methods [4], including the p-tile method [5] and the histogram concavity analysis method [6], were evaluated during the research but found to produce results comparable to the mode method at greater cost.) In the second step, the algorithm constructs a binary image (Fig. 1c) by thresholding the grayscale image at the end point of the region discovered in the first step. In the third step, morphological processes are applied to this binary image to produce a cleaner outline (Fig. 1d): thresholding often selects dark portions of waves or shadows as part of the fin, so the image is iteratively opened (erosion followed by dilation). In the fourth step, the largest region of black pixels is selected (Fig. 1e). The outline is formed in the fifth step by eroding a copy of the refined binary image once and performing an exclusive or (XOR) with the unchanged version, which yields a one-pixel-wide outline (Fig. 1f). The largest outline is then selected (Fig. 1g), as smaller outlines often result from sun-glare spots on the fin and other abnormalities. In the next step, the start and end of the fin outline are detected, and the outline is walked to form a chain of coordinate pairs. In the final step (also performed for manually traced fins), the algorithm plots the points on the original color image and spaces/smoothes the outline using active contours to reposition it along the true dorsal fin edge (Fig. 1h).

Conclusion
The algorithm recognized its failure to provide a usable outline for 78 images in the test set of 302. The majority of these were images of low contrast (see Fig. 4 for an example). The next research phase focuses on methods to utilize color information in the formation of an alternate intensity grayscale image.
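The unsupervised thresholding step can be sketched as below. This is a minimal illustration in Python/NumPy, not DARWIN's code: the grayscale weights, the histogram-smoothing width, and the reading of the mode method [3] as "threshold at the valley after the darkest mode" are all assumptions, and the synthetic image stands in for a real fin photograph.

```python
import numpy as np

def grayscale(rgb):
    """Luminance-weighted grayscale conversion (common Rec. 601 weights;
    the poster does not specify which conversion DARWIN uses)."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def mode_method_threshold(gray, smooth=11):
    """One plausible reading of the mode method: smooth the histogram,
    find the two dominant modes (dark fin, bright water), and threshold
    at the valley between them, i.e. the end point of the dark region."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    p1 = int(np.argmax(hist))            # strongest mode
    masked = hist.copy()
    masked[max(0, p1 - 32):min(256, p1 + 32)] = -1
    p2 = int(np.argmax(masked))          # second mode, away from the first
    a, b = sorted((p1, p2))
    return a + int(np.argmin(hist[a:b + 1]))   # valley between the modes

# Synthetic example: a dark "fin" blob on brighter "water"
img = np.full((64, 64, 3), 180, dtype=np.uint8)
img[20:50, 20:40] = 40                   # dark fin-like region
gray = grayscale(img)
t = mode_method_threshold(gray)
binary = gray <= t                       # True where "fin"
```

On this synthetic image the valley falls between the dark and bright plateaus, so thresholding at it isolates exactly the dark blob; a real photograph would of course have far noisier modes.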
For example, if the postulate that sea water has a higher intensity in the blue-green channels than the fin proves true, the region with the least amount of blue-green should be the fin. If so, there exist weights that, when applied to the RGB channels, would produce an intensity image with higher contrast between water and fin. Overall, automated fin-outline extraction should facilitate use of automated recognition packages. If the algorithm fails to identify the outline, the user traces the outline as usual at no loss in time; when an outline is successfully extracted, the user proceeds directly to matching the fin and bypasses the time-consuming and visually fatiguing manual tracing process.

Introduction
As cetaceans (whales, dolphins, porpoises) can be uniquely identified by the features of their dorsal fins, researchers frequently employ photo-identification techniques in studies of population, migration, social interaction, etc. Researchers construct a catalog of known individuals and classify fins based on the location of primary identifying damage features. The subsequent comparison of unknown fins to this catalog is time consuming and tedious. With the pervasive use of digital cameras in such research, the quantity of photographic data is increasing, and the backlog of individual identification delays meaningful data analysis. Thus, automated fin comparison can potentially improve productivity. Automated methods to compare dorsal fin photographs exist and significantly reduce the number of photographs which must be examined manually to correctly identify an individual. However, these methods often require more work on the end user's part than simply looking through a catalog of photos. Tracing a fin outline accounts for a large portion of the time required for data input. Not only is the process time consuming, but the nature of the task also causes visual fatigue.
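The channel-weighting postulate from the conclusion above can be sketched as follows. The weights (1.0, -0.5, -0.5) and the synthetic colors are illustrative placeholders, not values from the research; the sketch only shows how signed RGB weights could raise the contrast between a blue-green "water" region and a redder "fin" region that plain grayscale barely separates.

```python
import numpy as np

def weighted_intensity(rgb, w=(1.0, -0.5, -0.5)):
    """Combine RGB channels with signed weights so regions strong in
    blue-green (water, under the stated postulate) map to low intensity
    and the fin maps to high intensity. Weights are illustrative only."""
    out = rgb.astype(float) @ np.array(w)
    out -= out.min()                     # rescale to 0..255 so the result
    if out.max() > 0:                    # is comparable to plain grayscale
        out *= 255.0 / out.max()
    return out.astype(np.uint8)

# Low-contrast example: fin and water have similar plain-grayscale
# intensity, but the water is relatively stronger in blue-green.
img = np.zeros((32, 32, 3), dtype=np.uint8)
img[:, :] = (60, 110, 120)       # "water": blue-green dominant
img[8:24, 8:24] = (100, 90, 80)  # "fin": red dominant
plain = (img @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)
weighted = weighted_intensity(img)
```

Here the plain grayscale image leaves only a few intensity levels between fin and water, while the weighted image pushes them to opposite ends of the range, which is exactly the property the proposed alternate intensity image would need for thresholding to succeed.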
In order to make these packages more usable and to improve the end-user experience, a process to automatically extract a fin outline from a photograph is desirable.

Figure 4: An example of a fin which failed to produce a usable outline. As can be seen, the grayscale image does not facilitate segmentation of the image into fin and water.

Figure 1: Intermediate steps in the automatic extraction of a fin's outline from a digital photo. (a) Color image, (b) grayscale image, (c) result of unsupervised thresholding based on histogram analysis, (d) a series of morphological processes [open (erosion/dilation), erosion, AND with original binary image c], (e) feature recognition to select the largest feature, (f) outline produced by one erosion and XOR with e, (g) feature recognition to select the largest outline, (h) outline spaced/smoothed using active contours.

Acknowledgements
This research was supported by the National Science Foundation under grant number DBI.

Testing
The fin extraction code was developed using a set of 36 images collected from field research. The test set comprised 302 images distinct from the 36 in the development set; these were collected as part of an active population study of bottlenose dolphins (Tursiops truncatus) in Boca Ciega Bay, FL. Of the 302, the program automatically produced a usable outline for 106 images (35.1%). Another 100 images (33.11%) required only slight modification to the automatically extracted fin outline. The autotrace algorithm was unable to produce an outline for 78 images (25.83%). Finally, the algorithm produced an unusable outline (usually selecting the wrong feature) for 18 images (5.96%).

Figure 3: The results of a test set of 302 never-seen-before images.

Figure 2: Examples of automatically extracted fin outlines.
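The refinement steps shown in Figure 1(d)-(f), namely opening, largest-region selection, and the erode-and-XOR outline, can be sketched with naive 3x3 structuring elements as below. This is a self-contained NumPy illustration, not DARWIN's implementation: the structuring element, iteration counts, and connectivity are assumptions, and True plays the role of the fin ("black") pixels.

```python
import numpy as np

def erode(b):
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighborhood is set."""
    p = np.pad(b, 1, constant_values=False)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def dilate(b):
    """3x3 binary dilation: a pixel is set if any 3x3 neighbor is set."""
    p = np.pad(b, 1, constant_values=False)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def open_binary(b, iterations=1):
    """Morphological opening (erosion then dilation), as used to strip
    wave and shadow fragments from the thresholded image."""
    for _ in range(iterations):
        b = dilate(erode(b))
    return b

def largest_component(b):
    """Keep only the largest 4-connected region of set pixels
    (flood-fill labeling)."""
    labels = np.zeros(b.shape, dtype=int)
    cur = 0
    for y, x in zip(*np.nonzero(b)):
        if labels[y, x]:
            continue
        cur += 1
        labels[y, x] = cur
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < b.shape[0] and 0 <= nx < b.shape[1]
                        and b[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    stack.append((ny, nx))
    if cur == 0:
        return b
    sizes = np.bincount(labels.ravel())[1:]
    return labels == (1 + int(np.argmax(sizes)))

def one_pixel_outline(b):
    """Outline = region XOR its erosion: exactly the boundary pixels."""
    return b ^ erode(b)

# Noisy binary image: one large blob (the fin) plus small speckles
binary = np.zeros((40, 40), dtype=bool)
binary[10:30, 10:30] = True           # fin candidate
binary[2, 2] = binary[35, 5] = True   # wave/shadow speckles
cleaned = largest_component(open_binary(binary))
outline = one_pixel_outline(cleaned)
```

Opening removes the isolated speckles while restoring the blob to its original extent, and the XOR with a single erosion leaves just the one-pixel boundary ring that the chain-walking step would then convert to coordinate pairs.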