Image Analysis Phases
– Image pre-processing: noise suppression, linear and non-linear filters, deconvolution, etc.
– Image segmentation: detection of objects using thresholding, edge detection, region growing, template matching, mathematical morphology, etc.
– Object description: determination of object attributes such as area, volume, perimeter, surface, boundary, roundness, etc.
– Object classification: dividing the detected objects into several classes based on the object attributes.
– Image understanding: making sense of the detected and classified objects; complex understanding of the image data.
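The phases above can be sketched as a minimal pipeline on a toy grayscale image. All function names and thresholds below are illustrative, not from any particular library; real pipelines use dedicated image-processing packages.

```python
# Minimal sketch of the image-analysis phases on a toy 2-D image
# stored as a list of rows. Thresholds are illustrative only.

def preprocess(image, noise_level=1):
    # Phase 1: noise suppression -- clamp low-level noise to zero.
    return [[0 if px <= noise_level else px for px in row] for row in image]

def segment(image, threshold=5):
    # Phase 2: global thresholding -- 1 = object pixel, 0 = background.
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def describe(mask):
    # Phase 3: one basic attribute -- object area (pixel count).
    return {"area": sum(sum(row) for row in mask)}

def classify(attributes, small_limit=4):
    # Phase 4: divide objects into classes by an attribute.
    return "small" if attributes["area"] <= small_limit else "large"

image = [
    [0, 1, 0, 0],
    [0, 9, 8, 0],
    [0, 9, 9, 0],
    [0, 0, 1, 0],
]
mask = segment(preprocess(image))
label = classify(describe(mask))  # the 2x2 bright blob -> "small"
```

Phase 5 (understanding) has no single generic implementation; it is problem-specific, as the later slides explain.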

Object Description
Why object description?
– After segmentation, the objects need to be described in order to perform the subsequent classification phase. Usually only those parameters that are needed for classification are computed.
The basic parameters are:
– coordinates
– size (area or volume)
– perimeter or surface area
– mean or peak intensity
– boundary
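Several of these basic parameters can be computed directly from a binary mask. A minimal sketch (4-connectivity is assumed; a perimeter pixel is taken to be an object pixel with at least one background neighbour):

```python
# Basic object parameters from a binary mask: area (pixel count),
# perimeter (object pixels touching background, 4-connectivity)
# and the coordinate centroid.

def object_parameters(mask):
    h, w = len(mask), len(mask[0])
    area, perim, sum_y, sum_x = 0, 0, 0.0, 0.0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            sum_y += y
            sum_x += x
            # A pixel is on the perimeter if any 4-neighbour is background
            # (or lies outside the image).
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]:
                    perim += 1
                    break
    return {"area": area, "perimeter": perim,
            "centroid": (sum_y / area, sum_x / area)}

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
params = object_parameters(mask)  # 2x2 square: area 4, all pixels on the border
```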

Object Description
Boundary
– The boundary is usually represented as an encoded chain of points: only the absolute position of the first point is stored, and after that only the direction from each point to the next. This is the so-called Freeman chain code (Freeman, 1961).
– The boundary has properties of its own that can also be calculated. For example, curvature can be computed, defined as the ratio between the number of boundary pixels where the boundary changes direction significantly and the total number of boundary pixels.
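The chain-code idea can be shown in a few lines. This sketch assumes the common 8-neighbour convention (direction 0 = east, numbered counter-clockwise) and an already-traced, ordered list of boundary points:

```python
# Freeman chain code: store the first boundary point absolutely, then
# only the direction (0-7) from each point to the next.
# Convention assumed here: 0 = east, counting counter-clockwise,
# with (row, col) coordinates and rows growing downwards.

DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_encode(boundary):
    """boundary: ordered list of (row, col) points along the contour."""
    start = boundary[0]
    code = [DIRECTIONS[(y1 - y0, x1 - x0)]
            for (y0, x0), (y1, x1) in zip(boundary, boundary[1:])]
    return start, code

# Closed contour of a 2x2 square of pixel centres, traced clockwise:
points = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
start, chain = freeman_encode(points)  # start (0, 0), chain [0, 6, 4, 2]
```

Only `start` needs absolute coordinates; the rest of the boundary is a sequence of 3-bit direction codes, which is far more compact than storing every point.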

Object Description
Many other special object parameters can be computed, such as:
– Center of mass (coordinate average weighted by intensity)
– Minimal bounding rectangle in 2D, or bounding box in 3D (in 2D: the rectangle of minimal area that contains the given object; in 3D: the parallelepiped of minimal volume that contains the given object)
– Elongatedness (A/(2d)², where A is the object area and d is the number of erosion steps that must be applied before the object completely disappears)
– Direction of an elongated object (the direction of the longer side of the minimum bounding rectangle or box)
– Circularity (= roundness) (4πA/P², where A is the area and P the perimeter of the object)
– Convex hull
– Skeleton
– etc.
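The circularity formula is easy to sanity-check: for an ideal disc, A = πr² and P = 2πr, so 4πA/P² is exactly 1, while elongated shapes score lower. A small sketch:

```python
import math

# Circularity (roundness) as defined above: 4*pi*A / P**2.
# A perfect disc scores exactly 1; elongated shapes score lower.

def circularity(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

# Ideal circle of radius r: A = pi*r^2, P = 2*pi*r  ->  exactly 1.
r = 5.0
round_score = circularity(math.pi * r ** 2, 2 * math.pi * r)

# A 1x10 rectangle: A = 10, P = 22  ->  clearly below 1.
elongated_score = circularity(10.0, 22.0)
```

Note that on a discrete pixel grid the measured perimeter depends on how boundary length is estimated, so digital objects rarely reach the ideal value of 1.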

Object Classification
What is object classification?
– The classification step tries to divide the objects detected during the segmentation step into several classes.
– Classification is impossible without a priori knowledge: the properties of the individual classes must be known beforehand.
– The number of classes is also usually known beforehand; it is derived from the problem specification.
– The objects are usually classified according to their object descriptions, which are compared with the descriptions of the individual classes.
– An example of a classification task is dividing cells into G0, S and G2/M classes (cell-cycle stages) according to their total DNA intensity. In practice, however, more parameters are usually taken into account.
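The cell-cycle example can be sketched as a rule-based classifier. The a priori knowledge is that G2/M cells carry roughly twice the DNA of G0 cells, with S-phase in between; the reference intensity and tolerance below are illustrative values, not measured ones:

```python
# Illustrative rule-based classifier for the cell-cycle example.
# g1_level (total DNA intensity of a G0 cell) and the tolerance band
# are assumed values for the sketch, not real calibration data.

def classify_cell(total_dna_intensity, g1_level=100.0, tolerance=0.15):
    g2_level = 2.0 * g1_level          # G2/M cells have doubled DNA content
    if total_dna_intensity <= g1_level * (1 + tolerance):
        return "G0"
    if total_dna_intensity >= g2_level * (1 - tolerance):
        return "G2/M"
    return "S"                         # intermediate DNA content

labels = [classify_cell(i) for i in (95.0, 150.0, 205.0)]
# -> ["G0", "S", "G2/M"]
```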

Object Classification
Two main approaches to the classification step:
1) A formal description is constructed. If a formal description can be written, the classifier can be realized quite easily by means of an appropriate programming language. The formal description of more complicated classes is often written precisely by means of formal grammars (formal languages), predicate logic, production rules or other mathematical tools.
2) A classifier is trained on a set of examples. The computer learns step by step which input corresponds to which class. The most frequent approach to classification based on learning from a set of examples is neural networks.
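Approach 2 in its smallest form can be shown with a nearest-mean classifier; this stands in for the neural-network approach mentioned above, since both learn the input-to-class mapping from labelled examples rather than from a hand-written formal description:

```python
# Smallest example of a trained classifier: nearest class mean on a
# single feature. This is a stand-in for learning-based classification
# in general (neural networks learn the same kind of mapping).

def train(examples):
    # examples: list of (feature_value, class_label) pairs.
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    # The "model" is just the mean feature value per class.
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    # Assign the class whose mean is closest to the input.
    return min(model, key=lambda label: abs(model[label] - value))

training_set = [(9.0, "small"), (11.0, "small"),
                (48.0, "large"), (52.0, "large")]
model = train(training_set)     # {"small": 10.0, "large": 50.0}
```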

Image Understanding
What is image understanding?
– Image understanding is the most complicated task and often requires interaction with the other phases of image analysis. Its aim is to make sense of the recognized and classified objects.
– Sometimes object classification is sufficient and no further image understanding is required. However, if we want to analyze objects in context with each other, the understanding phase is required.
– The approaches used for this task are specific to each problem.
– In cytometry (cell measurements), the task is usually only to measure and classify the individual cells; the cells are not treated in context with each other. Only during the final statistical evaluation of the cell attributes (e.g. in a spreadsheet program) are all cells taken into account (e.g. it is found that 20% of the cells are normal and 80% are aberrant).
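The final statistical evaluation from the cytometry example amounts to computing per-class fractions over all classified cells, for instance:

```python
from collections import Counter

# Final statistical evaluation: once every cell has been classified,
# summarise the population by per-class fractions.

def class_fractions(labels):
    counts = Counter(labels)
    total = len(labels)
    return {label: counts[label] / total for label in counts}

cells = ["normal"] * 2 + ["aberrant"] * 8   # toy population of 10 cells
fractions = class_fractions(cells)          # 20% normal, 80% aberrant
```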

Image Understanding
Four main approaches to the understanding step:
1) Bottom-up control (control by the image data). Processing proceeds from the raster image to the segmented image, to the region (object) descriptions, and on to their classification and the recognition of the scene.
2) Top-down control (model-based control). A set of assumptions and expected properties is constructed from a priori knowledge. The satisfaction of these properties is tested on the image representations at different processing levels, in a top-down direction, down to the original data. Image understanding is then internal model verification: the model is either accepted or rejected.
3) Combined control strategy. Bottom-up and top-down control mechanisms are combined in order to obtain a more flexible and powerful vision control strategy.
4) Non-hierarchical control. The next action is chosen based on the actual state and the information acquired so far about the problem being solved.

Segmentation: Biological Applications
Human genome visualization
Principle:
– Selected genes and chromosomes within cell nuclei are visualized using short DNA probes ( base pairs) that are complementary to the target gene or chromosome. Several probes are used for one gene; many probes are used for one chromosome.
– The probes are stained with a certain fluorescent dye (of a certain color). Probes for one gene or one chromosome are stained with the same color; different genes and chromosomes are stained with different colors.
Input:
– Images of cells at different stages of the cell cycle (i.e. with different amounts of DNA). The nuclear DNA is stained with a certain color called the counterstain; in this way the cell nuclei are visualized. Cells at different stages of the cell cycle have different intensities (proportional to their DNA content).
– Using various colors (different from the counterstain), the genes or chromosomes within the cell nuclei are visualized.
Tasks:
1) Find the cell nuclei within the counterstain image.
2) Find the genes (chromosomes) within the individual color channels.
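The two tasks chain naturally: the counterstain channel yields a nucleus mask, and gene signals in the other channels are then accepted only inside that mask. A minimal sketch with illustrative thresholds and toy channel data:

```python
# Task sketch: (1) threshold the counterstain channel to find nuclear
# pixels, (2) accept gene-signal pixels only inside a nucleus.
# Thresholds and channel values are illustrative toy data.

def threshold(channel, level):
    return [[px >= level for px in row] for row in channel]

def spots_inside(nuclei_mask, gene_channel, level):
    # Keep only bright gene pixels that fall inside the nucleus mask.
    return [(y, x)
            for y, row in enumerate(gene_channel)
            for x, px in enumerate(row)
            if px >= level and nuclei_mask[y][x]]

counterstain = [[0, 5, 5],
                [0, 5, 5],
                [0, 0, 0]]
gene_channel = [[9, 0, 0],
                [0, 0, 9],
                [0, 9, 0]]

nuclei = threshold(counterstain, 3)
gene_spots = spots_inside(nuclei, gene_channel, 8)
# Only the bright pixel at (1, 2) lies inside a nucleus; the other two
# bright gene pixels fall on background and are rejected.
```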

Segmentation: Biological Applications Segmentation of cells

Segmentation: Biological Applications Typical example of gene behaviour in a 3D image