Multi-Sensor Image Fusion (MSIF)
Team Members: Phu Kieu, Keenan Knaur
Faculty Advisor: Dr. Eun-Young (Elaine) Kang
Northrop Grumman Liaison: Richard Gilmore

Department of Computer Science, College of Engineering, Computer Science, and Technology, California State University, Los Angeles

Introduction
MSIF fuses two images of the same scene taken by different space sensors: given the two images and per-pixel latitude and longitude information, it calculates the altitudes and velocities of clouds.

SW development environment:
- Microsoft Visual C++
- OpenCV: an open-source library of programming functions for real-time computer vision

Algorithm Description
1. Input Data Preprocessing: the routine extracts the visual channel from the GOES satellite dataset.
2. Equirectangular Map Projection: both images are transformed into an equirectangular map projection, which rectifies them for the matching algorithms.
3. Cloud Detection: k-means clustering and connected component analysis are used to detect clouds.
4. Matching and Disparity Extraction: matching algorithms are used to find the disparity of each cloud.
5. Altitude and Velocity Calculation: the disparity is used to calculate the altitude of the cloud, under the assumption that no movement has taken place between the two images.
6. Data Visualization: a disparity map and a velocity-altitude graph are displayed for each cloud.
(Illustrative code sketches for steps 2 through 6 appear at the end of this transcript.)

1. Input Data
GOES-12 (West): resolution 2500x912, scan time 5 min.
GOES-11 (East): resolution 2000x850, scan time 15 min.

2. Equirectangular Projection
[Figure: West and East overlap regions after projection.]
- Rectifies both images to a common coordinate system
- Results in equal distances between latitude and longitude lines
- Makes matching image features easier

3. Cloud Identification
[Figure: identified clouds, West.]
- K-means clustering identifies cloud pixels based on intensity
- Connected component analysis (CCA) identifies individual clouds
- Accuracy is in the neighborhood of 85-90%

4. Matching and Disparity Extraction
Shape Histogram Matching (SHM):
- Finds an SHM feature vector per cloud
- The best match minimizes the Euclidean distance between two vectors
Mean Squared Difference (MSD):
- Finds the bounding box of a cloud and compares pixel intensities
- The best match minimizes the MSD

5. Altitude and Velocity Calculation
Altitude from Disparity:
- Construct vectors from the satellites (S1, S2) to the cloud's ground intersects (G1, G2)
- Compute the intersection of the two vectors (or, since two rays rarely meet exactly, the midpoint of the shortest line segment connecting them)
- Derive the altitude of the cloud from the intersection
Velocity from Altitude:
- Move the cloud point from its ground position (blue in the poster figure) to an assumed altitude (red)
- Construct vectors from the satellites to the assumed location
- Find the intersection of those vectors with the earth plane
- Compute the distance between the two intersections and subtract it from the original disparity; this is the cloud motion
- Divide the motion by the time difference
Clouds do not normally travel faster than 200 km/h, so this constraint narrows down the possible altitudes of a cloud.

6. Data Visualization
[Figure: disparity map, West.] Intensities are the disparity magnitudes of the clouds; the brighter, the larger.
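A minimal C++/OpenCV sketch of step 2 (equirectangular projection), assuming each GOES visual-channel image is an 8-bit grayscale cv::Mat accompanied by per-pixel latitude and longitude planes (CV_32F, degrees). The function name, grid bounds, and nearest-neighbor forward splat are illustrative choices, not the team's actual resampling scheme.

```cpp
// Sketch of step 2 (equirectangular reprojection), assuming per-pixel
// latitude/longitude planes; grid bounds and nearest-neighbor splatting
// are assumptions for illustration.
#include <opencv2/core.hpp>

cv::Mat toEquirectangular(const cv::Mat& img,  // CV_8UC1 intensities
                          const cv::Mat& lat,  // CV_32FC1, degrees
                          const cv::Mat& lon,  // CV_32FC1, degrees
                          double latMin, double latMax,
                          double lonMin, double lonMax,
                          int outRows, int outCols)
{
    cv::Mat out(outRows, outCols, CV_8UC1, cv::Scalar(0));
    // Splat each source pixel into a grid with uniform spacing in degrees,
    // which is what makes latitude/longitude lines equally spaced.
    for (int r = 0; r < img.rows; ++r)
        for (int c = 0; c < img.cols; ++c) {
            double la = lat.at<float>(r, c);
            double lo = lon.at<float>(r, c);
            int y = cvRound((latMax - la) / (latMax - latMin) * (outRows - 1));
            int x = cvRound((lo - lonMin) / (lonMax - lonMin) * (outCols - 1));
            if (y >= 0 && y < outRows && x >= 0 && x < outCols)
                out.at<uchar>(y, x) = img.at<uchar>(r, c);
        }
    return out;
}
```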
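A sketch of step 3 (cloud identification) using the two techniques the poster names: k-means clustering on pixel intensity, then connected component analysis to separate individual clouds. The cluster count K = 3, the "brightest cluster is cloud" rule, and the OpenCV 3+ cv::connectedComponents call are assumptions; the poster does not state these parameters.

```cpp
// Sketch of step 3: k-means on intensity + connected component analysis.
// K = 3 and the brightest-cluster rule are assumed, not from the poster.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat detectClouds(const cv::Mat& gray, int& numClouds)  // gray: CV_8UC1
{
    // One sample per pixel, one feature: intensity.
    cv::Mat samples;
    gray.reshape(1, (int)gray.total()).convertTo(samples, CV_32F);

    const int K = 3;                         // assumed cluster count
    cv::Mat labels, centers;
    cv::kmeans(samples, K, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                20, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Treat the cluster with the highest mean intensity as "cloud".
    int cloudCluster = 0;
    for (int k = 1; k < K; ++k)
        if (centers.at<float>(k) > centers.at<float>(cloudCluster))
            cloudCluster = k;

    // Binary cloud mask, reshaped back to image geometry.
    cv::Mat mask = (labels == cloudCluster);
    mask = mask.reshape(1, gray.rows);

    // CCA: each connected blob of cloud pixels becomes one labeled cloud.
    cv::Mat cc;
    numClouds = cv::connectedComponents(mask, cc, 8, CV_32S) - 1; // minus bg
    return cc;                               // CV_32S label image
}
```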
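A sketch of the two step-4 matching criteria. The contents of the SHM feature vector are not specified on the poster, so it is treated here as an opaque 1xN float row; in both cases the best match is simply the candidate minimizing the respective distance.

```cpp
// Sketch of step 4: SHM matching by minimum Euclidean distance, and MSD
// over equally sized bounding-box patches. Feature layout is assumed.
#include <opencv2/core.hpp>
#include <limits>
#include <vector>

// Best match = candidate whose feature vector minimizes Euclidean distance.
int bestShmMatch(const cv::Mat& feature,                  // 1xN CV_32F
                 const std::vector<cv::Mat>& candidates)  // each 1xN CV_32F
{
    int best = -1;
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        double dist = cv::norm(feature, candidates[i], cv::NORM_L2);
        if (dist < bestDist) { bestDist = dist; best = (int)i; }
    }
    return best;
}

// Mean squared difference between two same-sized CV_8UC1 patches; the best
// match is the candidate patch minimizing this value.
double msd(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat diff, sq;
    cv::absdiff(a, b, diff);
    diff.convertTo(diff, CV_32F);
    sq = diff.mul(diff);
    return cv::mean(sq)[0];
}
```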
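A sketch of step 5's altitude computation. The poster casts rays from each satellite through the cloud's ground intersects and, since two 3-D rays rarely meet exactly, takes the midpoint of the shortest segment between them; the closed form below is the standard closest-point-between-two-lines solution. The Earth-centered Cartesian frame and the types are assumptions.

```cpp
// Sketch of step 5 (altitude from disparity): rays S1->G1 and S2->G2,
// cloud placed at the midpoint of the shortest segment between them.
#include <opencv2/core.hpp>
#include <cmath>

cv::Point3d rayMidpoint(cv::Point3d S1, cv::Point3d G1,
                        cv::Point3d S2, cv::Point3d G2)
{
    cv::Point3d d1 = G1 - S1, d2 = G2 - S2, w = S1 - S2;
    double a = d1.dot(d1), b = d1.dot(d2), c = d2.dot(d2);
    double d = d1.dot(w),  e = d2.dot(w);
    double den = a * c - b * b;              // -> 0 when the rays are parallel
    if (std::abs(den) < 1e-12)
        return 0.5 * (G1 + G2);              // degenerate fallback
    double t = (b * e - c * d) / den;        // parameter along ray 1
    double s = (a * e - b * d) / den;        // parameter along ray 2
    cv::Point3d p1 = S1 + t * d1;            // closest point on ray 1
    cv::Point3d p2 = S2 + s * d2;            // closest point on ray 2
    return 0.5 * (p1 + p2);                  // estimated 3-D cloud position
}
// The cloud's altitude then follows from this point's height above the
// ground in the chosen frame.
```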
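A sketch of step 5's velocity computation as described on the poster: lift the cloud to a candidate altitude, project it back to the ground along each satellite's view ray, and compare the resulting ground separation with the original disparity; the residual is attributed to cloud motion and divided by the scan-time difference. The flat local ground plane (z = 0) and all names are simplifying assumptions, not the team's implementation.

```cpp
// Sketch of step 5 (velocity from an assumed altitude). Flat ground plane
// z = 0 is an assumed simplification of the earth-plane intersection.
#include <opencv2/core.hpp>
#include <cmath>

// Intersect the ray from satellite S through point C with the plane z = 0.
static cv::Point3d groundIntersect(cv::Point3d S, cv::Point3d C)
{
    cv::Point3d d = C - S;                  // assumes d.z != 0
    double t = -S.z / d.z;
    return S + t * d;
}

// Speed implied by candidate cloud position C (input distance units per
// hour). Candidates implying more than ~200 km/h can be rejected, which is
// how the poster narrows the possible altitudes.
double impliedSpeed(cv::Point3d S1, cv::Point3d S2, cv::Point3d C,
                    double originalDisparity, double dtHours)
{
    cv::Point3d g1 = groundIntersect(S1, C);
    cv::Point3d g2 = groundIntersect(S2, C);
    cv::Point3d diff = g2 - g1;
    double groundSep = std::sqrt(diff.dot(diff));
    double motion = std::abs(originalDisparity - groundSep);
    return motion / dtHours;
}
```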
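A sketch of step 6's disparity map: each cloud's disparity magnitude is rendered as a gray level, brighter meaning larger, matching the poster's description. The label image comes from the CCA step above; the label-to-disparity map and its name are hypothetical.

```cpp
// Sketch of step 6: disparity map where intensity encodes each cloud's
// disparity magnitude. `disparities` (cloud label -> magnitude) is a
// hypothetical structure assumed to exclude the background label 0.
#include <opencv2/core.hpp>
#include <algorithm>
#include <map>

cv::Mat disparityMap(const cv::Mat& cloudLabels,          // CV_32S from CCA
                     const std::map<int, double>& disparities)
{
    double maxDisp = 1e-9;                   // avoid division by zero
    for (std::map<int, double>::const_iterator it = disparities.begin();
         it != disparities.end(); ++it)
        maxDisp = std::max(maxDisp, it->second);

    cv::Mat vis(cloudLabels.size(), CV_8UC1, cv::Scalar(0));
    for (int r = 0; r < vis.rows; ++r)
        for (int c = 0; c < vis.cols; ++c) {
            std::map<int, double>::const_iterator it =
                disparities.find(cloudLabels.at<int>(r, c));
            if (it != disparities.end())     // background stays black
                vis.at<uchar>(r, c) = (uchar)(255.0 * it->second / maxDisp);
        }
    return vis;
}
```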